|
Records |
Links |
|
Author |
David Geronimo; Antonio Lopez; Angel Sappa; Thorsten Graf |
|
|
Title |
Survey on Pedestrian Detection for Advanced Driver Assistance Systems |
Type |
Journal Article |
|
Year |
2010 |
Publication |
IEEE Transactions on Pattern Analysis and Machine Intelligence |
Abbreviated Journal |
TPAMI |
|
|
Volume |
32 |
Issue |
7 |
Pages |
1239–1258 |
|
|
Keywords |
ADAS, pedestrian detection, on-board vision, survey |
|
|
Abstract |
Advanced driver assistance systems (ADASs), and particularly pedestrian protection systems (PPSs), have become an active research area aimed at improving traffic safety. The major challenge of PPSs is the development of reliable on-board pedestrian detection systems. Due to the varying appearance of pedestrians (e.g., different clothes, changing size, aspect ratio, and dynamic shape) and the unstructured environment, it is very difficult to achieve the robustness demanded of this kind of system. Two problems arising in this research area are the lack of public benchmarks and the difficulty in reproducing many of the proposed methods, which makes it difficult to compare the approaches. As a result, surveying the literature by enumerating the proposals one after another is not the most useful way to provide a comparative point of view. Accordingly, we present a more convenient strategy to survey the different approaches. We divide the problem of detecting pedestrians from images into different processing steps, each with attached responsibilities. Then, the different proposed methods are analyzed and classified with respect to each processing stage, favoring a comparative viewpoint. Finally, discussion of the important topics is presented, putting special emphasis on the future needs and challenges. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0162-8828 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
ADAS @ adas @ GLS2010 |
Serial |
1340 |
|
Permanent link to this record |
|
|
|
|
Author |
David Geronimo; Angel Sappa; Antonio Lopez; Daniel Ponsa |
|
|
Title |
Pedestrian Detection Using AdaBoost Learning of Features and Vehicle Pitch Estimation |
Type |
Miscellaneous |
|
Year |
2006 |
Publication |
6th IASTED International Conference on Visualization, Imaging and Image Processing |
Abbreviated Journal |
VIIP |
|
|
Volume |
|
Issue |
|
Pages |
400–405 |
|
|
Keywords |
ADAS, pedestrian detection, AdaBoost learning, pitch estimation, Haar wavelets, edge orientation histograms |
|
|
Abstract |
In this paper we propose a combination of different Haar filter sets and Edge Orientation Histograms (EOH) in order to learn a model for pedestrian detection. As we will show, with the addition of EOH we obtain better ROCs than using Haar filters alone. Hence, a model consisting of discriminant features, selected by AdaBoost, is applied to pedestrian-sized image windows in order to perform the classification. Additionally, taking into account the final application, a driver assistance system with real-time requirements, we propose a novel stereo-based camera pitch estimation to reduce the number of explored windows. With this approach, the system can work on urban roads, as will be illustrated by current results. |
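The AdaBoost feature-selection step described above can be sketched as follows. This is a minimal, self-contained illustration with single-feature decision stumps over generic filter responses, not the authors' implementation; the toy data, function names, and the exhaustive threshold search are assumptions made for the sketch.

```python
import numpy as np

def adaboost_stumps(X, y, n_rounds=10):
    """Discrete AdaBoost with one-feature decision stumps.

    X: (n_samples, n_features) feature responses (e.g. Haar/EOH filter outputs).
    y: labels in {-1, +1} (pedestrian / background).
    Returns a list of (feature_index, threshold, polarity, alpha).
    """
    n, d = X.shape
    w = np.full(n, 1.0 / n)              # sample weights, start uniform
    ensemble = []
    for _ in range(n_rounds):
        best, best_err = None, np.inf
        for j in range(d):               # pick the most discriminative feature
            for thr in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = pol * np.where(X[:, j] > thr, 1, -1)
                    err = w[pred != y].sum()
                    if err < best_err:
                        best_err, best = err, (j, thr, pol)
        err = max(best_err, 1e-12)       # avoid division by zero on clean splits
        alpha = 0.5 * np.log((1 - err) / err)
        j, thr, pol = best
        pred = pol * np.where(X[:, j] > thr, 1, -1)
        w *= np.exp(-alpha * y * pred)   # up-weight misclassified samples
        w /= w.sum()
        ensemble.append((j, thr, pol, alpha))
    return ensemble

def adaboost_predict(ensemble, X):
    """Weighted vote of the selected stumps."""
    score = np.zeros(X.shape[0])
    for j, thr, pol, alpha in ensemble:
        score += alpha * pol * np.where(X[:, j] > thr, 1, -1)
    return np.sign(score)
```

In the paper's setting each column of `X` would be one Haar or EOH filter response over the candidate window, so each boosting round effectively selects one discriminant feature.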
|
|
Address |
Palma de Mallorca (Spain) |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
ADAS @ adas @ GSL2006 |
Serial |
672 |
|
Permanent link to this record |
|
|
|
|
Author |
Q. Xue; Laura Igual; A. Berenguel; M. Guerrieri; L. Garrido |
|
|
Title |
Active Contour Segmentation with Affine Coordinate-Based Parametrization |
Type |
Conference Article |
|
Year |
2014 |
Publication |
9th International Conference on Computer Vision Theory and Applications |
Abbreviated Journal |
|
|
|
Volume |
1 |
Issue |
|
Pages |
5-14 |
|
|
Keywords |
Active Contours; Affine Coordinates; Mean Value Coordinates |
|
|
Abstract |
In this paper, we present a new framework for image segmentation based on parametrized active contours. The contour and the points of the image space are parametrized using a reduced set of control points that have to form a closed polygon in two-dimensional problems and a closed surface in three-dimensional problems. By moving the control points, the active contour evolves. We use mean value coordinates as the parametrization tool for the interface, which allows us to parametrize any point of the space, inside or outside the closed polygon or surface. Region-based energies such as the one proposed by Chan and Vese can be easily implemented in both two- and three-dimensional segmentation problems. We show the usefulness of our approach with several experiments. |
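The mean value coordinates used as the parametrization tool can be computed with Floater's formula: for a point x and polygon vertices v_i, the unnormalized weight is w_i = (tan(α_{i-1}/2) + tan(α_i/2)) / ||v_i − x||, where α_i is the signed angle at x between v_i and v_{i+1}. A minimal 2D sketch (not the paper's code; the polygon and point are invented for illustration):

```python
import numpy as np

def mean_value_coordinates(x, verts):
    """Mean value coordinates of point x w.r.t. a closed 2D polygon.

    verts: (n, 2) polygon vertices in counter-clockwise order; x: (2,) point
    strictly inside. Returns lam (n,) with lam.sum() == 1 and, by the linear
    precision property, sum_i lam[i] * verts[i] == x.
    """
    d = verts - x                          # vectors from x to each vertex
    r = np.linalg.norm(d, axis=1)          # distances to each vertex
    n = len(verts)
    alpha = np.empty(n)                    # signed angle between d_i and d_{i+1}
    for i in range(n):
        a, b = d[i], d[(i + 1) % n]
        alpha[i] = np.arctan2(a[0] * b[1] - a[1] * b[0], a @ b)
    t = np.tan(alpha / 2.0)
    w = (t + np.roll(t, 1)) / r            # (tan(a_{i-1}/2) + tan(a_i/2)) / r_i
    return w / w.sum()
```

Moving a control point v_i then moves every parametrized point x by lam[i] times the displacement, which is exactly how the reduced control polygon drives the contour evolution.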
|
|
Address |
Lisboa; January 2014 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
VISAPP |
|
|
Notes |
OR;MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ XIB2014 |
Serial |
2452 |
|
Permanent link to this record |
|
|
|
|
Author |
Albert Andaluz |
|
|
Title |
LV Contour Segmentation in TMR images using Semantic Description of Tissue and Prior Knowledge Correction |
Type |
Report |
|
Year |
2009 |
Publication |
CVC Technical Report |
Abbreviated Journal |
|
|
|
Volume |
142 |
Issue |
|
Pages |
|
|
|
Keywords |
Active Contour Models; Snakes; Active Shape Models; Deformable Templates; Left Ventricle Segmentation; Generalized Orthogonal Procrustes Analysis; Harmonic Phase Flow; Principal Component Analysis; Tagged Magnetic Resonance |
|
|
Abstract |
The diagnosis of Left Ventricle (LV) pathologies is related to regional wall motion analysis. Health indicator scores such as rotation and torsion are useful for the diagnosis of LV function. However, this requires proper identification of LV segments. On one hand, manual segmentation is robust, but it is slow and requires medical expertise. On the other hand, the tag pattern in Tagged Magnetic Resonance (TMR) sequences is a problem for the automatic segmentation of the LV boundaries. Consequently, we propose a method based on the classical formulation of parametric Snakes, combined with Active Shape Models. Our semantic definition of the LV is tagged tissue that experiences motion in the systolic cycle. This defines two energy potentials for the Snake convergence. Additionally, the mean shape corrects excessive deviation from the anatomical shape. We have validated our approach on 15 healthy volunteers and two short-axis cuts. In this way, we have compared the automatic segmentations to manual shapes outlined by medical experts. Also, we have explored the accuracy of clinical scores computed using automatic contours. The results show minor divergence between the approximations and the manual segmentations, as well as robust computation of clinical scores in all cases. From this we conclude that the proposed method is a promising support tool for clinical analysis. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
Master's thesis |
|
|
Publisher |
|
Place of Publication |
Bellaterra 08193, Barcelona, Spain |
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
IAM; |
Approved |
no |
|
|
Call Number |
IAM @ iam @ And2009 |
Serial |
1667 |
|
Permanent link to this record |
|
|
|
|
Author |
Swathikiran Sudhakaran; Sergio Escalera; Oswald Lanz |
|
|
Title |
Gate-Shift-Fuse for Video Action Recognition |
Type |
Journal Article |
|
Year |
2023 |
Publication |
IEEE Transactions on Pattern Analysis and Machine Intelligence |
Abbreviated Journal |
TPAMI |
|
|
Volume |
45 |
Issue |
9 |
Pages |
10913-10928 |
|
|
Keywords |
Action Recognition; Video Classification; Spatial Gating; Channel Fusion |
|
|
Abstract |
Convolutional Neural Networks are the de facto models for image recognition. However, 3D CNNs, the straightforward extension of 2D CNNs to video recognition, have not achieved the same success on standard action recognition benchmarks. One of the main reasons for this reduced performance of 3D CNNs is their increased computational complexity, which requires large-scale annotated datasets to train them at scale. 3D kernel factorization approaches have been proposed to reduce the complexity of 3D CNNs, but existing factorization approaches follow hand-designed and hard-wired techniques. In this paper we propose Gate-Shift-Fuse (GSF), a novel spatio-temporal feature extraction module which controls interactions in the spatio-temporal decomposition and learns to adaptively route features through time and combine them in a data-dependent manner. GSF leverages grouped spatial gating to decompose the input tensor and channel weighting to fuse the decomposed tensors. GSF can be inserted into existing 2D CNNs to convert them into efficient and high-performing spatio-temporal feature extractors, with negligible parameter and compute overhead. We perform an extensive analysis of GSF using two popular 2D CNN families and achieve state-of-the-art or competitive performance on five standard action recognition benchmarks. |
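The gate-then-shift-then-fuse idea can be sketched roughly as follows. This is not the paper's GSF module: it is a simplified NumPy illustration under an assumed (T, C, H, W) tensor layout, with a hand-set scalar gate and a plain per-channel fusion weight standing in for the learned convolutions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def temporal_shift(x):
    """Shift half of the channels one step forward in time, half backward."""
    T, C, H, W = x.shape
    out = np.zeros_like(x)
    out[1:, :C // 2] = x[:-1, :C // 2]     # forward shift (zero-padded ends)
    out[:-1, C // 2:] = x[1:, C // 2:]     # backward shift
    return out

def gate_shift_fuse(x, gate_w, fuse_w):
    """Hedged GSF-style sketch on a (T, C, H, W) feature tensor.

    A spatial gate decides, per location, how much of the feature is routed
    through the temporal shift; a per-channel weight fuses the shifted and
    residual parts. In the paper both are learned; here they are plain
    arrays (gate_w: scalar, fuse_w: (C,)) chosen for illustration.
    """
    g = sigmoid(gate_w * x)                     # spatial gating in [0, 1]
    shifted = temporal_shift(g * x)             # gated part exchanges info over time
    residual = (1.0 - g) * x                    # ungated part stays spatial
    w = sigmoid(fuse_w).reshape(1, -1, 1, 1)    # channel-wise fusion weights
    return w * shifted + (1.0 - w) * residual
```

The output has the same shape as the input, which is what allows a block like this to be dropped into an existing 2D CNN between convolutional stages.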
|
|
Address |
1 Sept. 2023 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HUPBA; no menciona |
Approved |
no |
|
|
Call Number |
Admin @ si @ SEL2023 |
Serial |
3814 |
|
Permanent link to this record |
|
|
|
|
Author |
Fahad Shahbaz Khan; Muhammad Anwer Rao; Joost Van de Weijer; Michael Felsberg; J.Laaksonen |
|
|
Title |
Deep semantic pyramids for human attributes and action recognition |
Type |
Conference Article |
|
Year |
2015 |
Publication |
Image Analysis, Proceedings of 19th Scandinavian Conference , SCIA 2015 |
Abbreviated Journal |
|
|
|
Volume |
9127 |
Issue |
|
Pages |
341-353 |
|
|
Keywords |
Action recognition; Human attributes; Semantic pyramids |
|
|
Abstract |
Describing persons and their actions is a challenging problem due to variations in pose, scale and viewpoint in real-world images. Recently, the semantic pyramids approach [1] for pose normalization has been shown to provide excellent results for gender and action recognition. The performance of the semantic pyramids approach relies on robust image description and is therefore limited by the use of shallow local features. In the context of object recognition [2] and object detection [3], convolutional neural networks (CNNs), or deep features, have been shown to improve performance over conventional shallow features. We propose deep semantic pyramids for human attributes and action recognition. The method works by constructing spatial pyramids based on CNNs of different part locations. These pyramids are then combined to obtain a single semantic representation. We validate our approach on the Berkeley and 27 Human Attributes datasets for attribute classification. For action recognition, we perform experiments on two challenging datasets: Willow and PASCAL VOC 2010. The proposed deep semantic pyramids provide a significant gain of 17.2%, 13.9%, 24.3% and 22.6% compared to the standard shallow semantic pyramids on the Berkeley, 27 Human Attributes, Willow and PASCAL VOC 2010 datasets, respectively. Our results also show that deep semantic pyramids outperform conventional CNNs based on the full bounding box of the person. Finally, we compare our approach with state-of-the-art methods and show a gain in performance compared to the best methods in the literature. |
|
|
Address |
Denmark; Copenhagen; June 2015 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer International Publishing |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0302-9743 |
ISBN |
978-3-319-19664-0 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
SCIA |
|
|
Notes |
LAMP; 600.068; 600.079;ADAS |
Approved |
no |
|
|
Call Number |
Admin @ si @ KRW2015b |
Serial |
2672 |
|
Permanent link to this record |
|
|
|
|
Author |
Maryam Asadi-Aghbolaghi; Albert Clapes; Marco Bellantonio; Hugo Jair Escalante; Victor Ponce; Xavier Baro; Isabelle Guyon; Shohreh Kasaei; Sergio Escalera |
|
|
Title |
Deep Learning for Action and Gesture Recognition in Image Sequences: A Survey |
Type |
Book Chapter |
|
Year |
2017 |
Publication |
Gesture Recognition |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
539-578 |
|
|
Keywords |
Action recognition; Gesture recognition; Deep learning architectures; Fusion strategies |
|
|
Abstract |
Interest in automatic action and gesture recognition has grown considerably in the last few years. This is due in part to the large number of application domains for this type of technology. As in many other computer vision areas, deep learning based methods have quickly become a reference methodology for obtaining state-of-the-art performance in both tasks. This chapter is a survey of current deep learning based methodologies for action and gesture recognition in sequences of images. The survey reviews both fundamental and cutting-edge methodologies reported in the last few years. We introduce a taxonomy that summarizes important aspects of deep learning for approaching both tasks. Details of the proposed architectures, fusion strategies, main datasets, and competitions are reviewed. Also, we summarize and discuss the main works proposed so far with particular interest in how they treat the temporal dimension of data, their highlighting features, and opportunities and challenges for future research. To the best of our knowledge this is the first survey on the topic. We foresee this survey will become a reference in this ever dynamic field of research. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HUPBA; no proj |
Approved |
no |
|
|
Call Number |
Admin @ si @ ACB2017a |
Serial |
2981 |
|
Permanent link to this record |
|
|
|
|
Author |
Thomas B. Moeslund; Sergio Escalera; Gholamreza Anbarjafari; Kamal Nasrollahi; Jun Wan |
|
|
Title |
Statistical Machine Learning for Human Behaviour Analysis |
Type |
Journal Article |
|
Year |
2020 |
Publication |
Entropy |
Abbreviated Journal |
ENTROPY |
|
|
Volume |
25 |
Issue |
5 |
Pages |
530 |
|
|
Keywords |
action recognition; emotion recognition; privacy-aware |
|
|
Abstract |
|
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HuPBA; no proj |
Approved |
no |
|
|
Call Number |
Admin @ si @ MEA2020 |
Serial |
3441 |
|
Permanent link to this record |
|
|
|
|
Author |
Mohamed Ilyes Lakhal; Albert Clapes; Sergio Escalera; Oswald Lanz; Andrea Cavallaro |
|
|
Title |
Residual Stacked RNNs for Action Recognition |
Type |
Conference Article |
|
Year |
2018 |
Publication |
9th International Workshop on Human Behavior Understanding |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
534-548 |
|
|
Keywords |
Action recognition; Deep residual learning; Two-stream RNN |
|
|
Abstract |
Action recognition pipelines that use Recurrent Neural Networks (RNN) are currently 5–10% less accurate than Convolutional Neural Networks (CNN). While most works that use RNNs employ a 2D CNN on each frame to extract descriptors for action recognition, we extract spatiotemporal features from a 3D CNN and then learn the temporal relationship of these descriptors through a stacked residual recurrent neural network (Res-RNN). We introduce for the first time residual learning to counter the degradation problem in multi-layer RNNs, which have been successful for temporal aggregation in two-stream action recognition pipelines. Finally, we use a late fusion strategy to combine RGB and optical flow data of the two-stream Res-RNN. Experimental results show that the proposed pipeline achieves competitive results on UCF-101 and state-of-the-art results for RNN-like architectures on the challenging HMDB-51 dataset. |
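The residual stacking idea, i.e. adding an identity skip connection around each recurrent layer so that deeper stacks do not degrade, can be sketched as follows. This is a minimal NumPy illustration with plain tanh RNN cells, not the paper's Res-RNN (which uses gated cells and learned weights); shapes and parameter names are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_layer(X, Wx, Wh, b):
    """Run a simple tanh RNN over a sequence. X: (T, D) -> (T, D) hidden states."""
    T, D = X.shape
    h = np.zeros(D)
    out = np.empty((T, D))
    for t in range(T):
        h = np.tanh(X[t] @ Wx + h @ Wh + b)
        out[t] = h
    return out

def res_stacked_rnn(X, params):
    """Stack RNN layers with identity skip connections: h_{l+1} = RNN_l(h_l) + h_l.

    params: list of (Wx, Wh, b) per layer, all of size D so the residual
    addition is well defined. The skip connection is what counters the
    degradation problem when many recurrent layers are stacked.
    """
    h = X
    for Wx, Wh, b in params:
        h = rnn_layer(h, Wx, Wh, b) + h
    return h
```

In the two-stream pipeline, one such stack would process 3D-CNN descriptors of RGB frames and another the optical-flow descriptors, with their outputs combined by late fusion.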
|
|
Address |
Munich; September 2018 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ECCVW |
|
|
Notes |
HUPBA; no proj |
Approved |
no |
|
|
Call Number |
Admin @ si @ LCE2018b |
Serial |
3206 |
|
Permanent link to this record |
|
|
|
|
Author |
Muhammad Anwer Rao; Fahad Shahbaz Khan; Joost Van de Weijer; Jorma Laaksonen |
|
|
Title |
Top-Down Deep Appearance Attention for Action Recognition |
Type |
Conference Article |
|
Year |
2017 |
Publication |
20th Scandinavian Conference on Image Analysis |
Abbreviated Journal |
|
|
|
Volume |
10269 |
Issue |
|
Pages |
297-309 |
|
|
Keywords |
Action recognition; CNNs; Feature fusion |
|
|
Abstract |
Recognizing human actions in videos is a challenging problem in computer vision. Recently, convolutional neural network based deep features have shown promising results for action recognition. In this paper, we investigate the problem of fusing deep appearance and motion cues for action recognition. We propose a video representation which combines deep appearance and motion based local convolutional features within the bag-of-deep-features framework. Firstly, dense deep appearance and motion based local convolutional features are extracted from spatial (RGB) and temporal (flow) networks, respectively. Both visual cues are processed in parallel by constructing separate visual vocabularies for appearance and motion. A category-specific appearance map is then learned to modulate the weights of the deep motion features. The proposed representation is discriminative and binds the deep local convolutional features to their spatial locations. Experiments are performed on two challenging datasets: the JHMDB dataset with 21 action classes and the ACT dataset with 43 categories. The results clearly demonstrate that our approach outperforms both standard approaches of early and late feature fusion. Further, our approach employs only action labels, without exploiting body part information, yet achieves competitive performance compared to the state-of-the-art deep features based approaches. |
|
|
Address |
Tromso; June 2017 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
SCIA |
|
|
Notes |
LAMP; 600.109; 600.068; 600.120 |
Approved |
no |
|
|
Call Number |
Admin @ si @ RKW2017b |
Serial |
3039 |
|
Permanent link to this record |
|
|
|
|
Author |
Fahad Shahbaz Khan; Joost Van de Weijer; Muhammad Anwer Rao; Andrew Bagdanov; Michael Felsberg; Jorma Laaksonen |
|
|
Title |
Scale coding bag of deep features for human attribute and action recognition |
Type |
Journal Article |
|
Year |
2018 |
Publication |
Machine Vision and Applications |
Abbreviated Journal |
MVAP |
|
|
Volume |
29 |
Issue |
1 |
Pages |
55-71 |
|
|
Keywords |
Action recognition; Attribute recognition; Bag of deep features |
|
|
Abstract |
Most approaches to human attribute and action recognition in still images are based on image representation in which multi-scale local features are pooled across scale into a single, scale-invariant encoding. Both in bag-of-words and the recently popular representations based on convolutional neural networks, local features are computed at multiple scales. However, these multi-scale convolutional features are pooled into a single scale-invariant representation. We argue that entirely scale-invariant image representations are sub-optimal and investigate approaches to scale coding within a bag of deep features framework. Our approach encodes multi-scale information explicitly during the image encoding stage. We propose two strategies to encode multi-scale information explicitly in the final image representation. We validate our two scale coding techniques on five datasets: Willow, PASCAL VOC 2010, PASCAL VOC 2012, Stanford-40 and Human Attributes (HAT-27). On all datasets, the proposed scale coding approaches outperform both the scale-invariant method and the standard deep features of the same network. Further, combining our scale coding approaches with standard deep features leads to consistent improvement over the state of the art. |
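The core idea of scale coding, pooling local features into per-scale-bin histograms instead of one scale-invariant bag, can be sketched as follows. This is a toy illustration under the bag-of-features framework, not the paper's method: the vocabulary, the quantile binning, and all names are invented for the sketch.

```python
import numpy as np

def scale_coded_bag(features, scales, vocab, n_scale_bins=2):
    """Pool local descriptors into per-scale-bin visual-word histograms.

    features: (n, d) local descriptors; scales: (n,) extraction scale of each
    descriptor; vocab: (k, d) visual words. Returns a normalized histogram of
    length n_scale_bins * k, so scale information survives the encoding
    instead of being pooled away.
    """
    # assign each descriptor to its nearest visual word
    d2 = ((features[:, None, :] - vocab[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(1)
    # split descriptors into coarse scale bins (e.g. small vs large scales)
    edges = np.quantile(scales, np.linspace(0, 1, n_scale_bins + 1))
    bins = np.clip(np.searchsorted(edges, scales, side='right') - 1,
                   0, n_scale_bins - 1)
    k = len(vocab)
    hist = np.zeros(n_scale_bins * k)
    for b, w in zip(bins, words):
        hist[b * k + w] += 1
    return hist / max(hist.sum(), 1)
```

Concatenating the per-bin histograms is what makes the final representation explicitly multi-scale, in contrast to the standard scale-invariant pooling the abstract argues against.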
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
LAMP; 600.068; 600.079; 600.106; 600.120 |
Approved |
no |
|
|
Call Number |
Admin @ si @ KWR2018 |
Serial |
3107 |
|
Permanent link to this record |
|
|
|
|
Author |
T.Chauhan; E.Perales; Kaida Xiao; E.Hird ; Dimosthenis Karatzas; Sophie Wuerger |
|
|
Title |
The achromatic locus: Effect of navigation direction in color space |
Type |
Journal Article |
|
Year |
2014 |
Publication |
Journal of Vision |
Abbreviated Journal |
VSS |
|
|
Volume |
14 (1) |
Issue |
25 |
Pages |
1-11 |
|
|
Keywords |
achromatic; unique hues; color constancy; luminance; color space |
|
|
Abstract |
An achromatic stimulus is defined as a patch of light that is devoid of any hue. This is usually achieved by asking observers to adjust the stimulus such that it looks neither red nor green and at the same time neither yellow nor blue. Despite the theoretical and practical importance of the achromatic locus, little is known about the variability in these settings. The main purpose of the current study was to evaluate whether achromatic settings were dependent on the task of the observers, namely the navigation direction in color space. Observers could either adjust the test patch along the two chromatic axes in the CIE u*v* diagram or, alternatively, navigate along the unique-hue lines. Our main result is that the navigation method affects the reliability of these achromatic settings. Observers are able to make more reliable achromatic settings when adjusting the test patch along the directions defined by the four unique hues as opposed to navigating along the main axes in the commonly used CIE u*v* chromaticity plane. This result holds across different ambient viewing conditions (Dark, Daylight, Cool White Fluorescent) and different test luminance levels (5, 20, and 50 cd/m2). The reduced variability in the achromatic settings is consistent with the idea that internal color representations are more aligned with the unique-hue lines than the u* and v* axes. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
DAG; 600.077 |
Approved |
no |
|
|
Call Number |
Admin @ si @ CPX2014 |
Serial |
2418 |
|
Permanent link to this record |
|
|
|
|
Author |
Arnau Baro; Jialuo Chen; Alicia Fornes; Beata Megyesi |
|
|
Title |
Towards a generic unsupervised method for transcription of encoded manuscripts |
Type |
Conference Article |
|
Year |
2019 |
Publication |
3rd International Conference on Digital Access to Textual Cultural Heritage |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
73-78 |
|
|
Keywords |
|
|
Abstract |
Historical ciphers, a special type of manuscripts, contain encrypted information, important for the interpretation of our history. The first step towards decipherment is to transcribe the images, either manually or by automatic image processing techniques. Despite the improvements in handwritten text recognition (HTR) thanks to deep learning methodologies, the need for labelled training data is an important limitation. Given that ciphers often use symbol sets across various alphabets and unique symbols without any transcription scheme available, these supervised HTR techniques are not suitable to transcribe ciphers. In this paper we propose an unsupervised method for transcribing encrypted manuscripts based on clustering and label propagation, which has been successfully applied to community detection in networks. We analyze the performance on ciphers with various symbol sets, and discuss the advantages and drawbacks compared to supervised HTR methods. |
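The label propagation step borrowed from community detection can be sketched as follows. This is a minimal, pure-Python illustration, not the paper's method: in the transcription setting the graph nodes would be symbol images and the edges visual-similarity links, whereas here the toy adjacency and the min-label tie-breaking are assumptions for the sketch.

```python
import random

def label_propagation(adj, n_iter=20, seed=0):
    """Community detection by asynchronous label propagation.

    adj: dict node -> set of neighbours (undirected graph). Each node starts
    in its own community and repeatedly adopts the most frequent label among
    its neighbours (ties broken by the smallest label), until no label changes.
    """
    rng = random.Random(seed)
    labels = {v: v for v in adj}
    nodes = list(adj)
    for _ in range(n_iter):
        rng.shuffle(nodes)               # visit nodes in random order
        changed = False
        for v in nodes:
            if not adj[v]:
                continue
            counts = {}
            for u in adj[v]:
                counts[labels[u]] = counts.get(labels[u], 0) + 1
            best = max(counts.values())
            new = min(l for l, c in counts.items() if c == best)
            if new != labels[v]:
                labels[v] = new
                changed = True
        if not changed:                  # converged: every node keeps its label
            break
    return labels
```

Nodes that end up sharing a label form one cluster; in the transcription pipeline each such cluster would then be mapped to one cipher symbol.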
|
|
Address |
Brussels; May 2019 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
DATeCH |
|
|
Notes |
DAG; 600.097; 600.140; 600.121 |
Approved |
no |
|
|
Call Number |
Admin @ si @ BCF2019 |
Serial |
3276 |
|
Permanent link to this record |
|
|
|
|
Author |
Gemma Rotger; Francesc Moreno-Noguer; Felipe Lumbreras; Antonio Agudo |
|
|
Title |
Detailed 3D face reconstruction from a single RGB image |
Type |
Journal Article |
|
Year |
2019 |
Publication |
Journal of WSCG |
Abbreviated Journal |
JWSCG |
|
|
Volume |
27 |
Issue |
2 |
Pages |
103-112 |
|
|
Keywords |
3D Wrinkle Reconstruction; Face Analysis; Optimization |
|
|
Abstract |
This paper introduces a method to obtain a detailed 3D reconstruction of facial skin from a single RGB image. To this end, we propose the exclusive use of an input image, without requiring any information about the observed material or training data to model the wrinkle properties. Wrinkles are detected and characterized directly from the image via a simple and effective parametric model, determining several features such as location, orientation, width, and height. With these ingredients, we propose to minimize a photometric error to retrieve the final detailed 3D map, which is initialized by current techniques based on deep learning. In contrast with other approaches, we only require estimating a depth parameter, making our approach fast and intuitive. An extensive experimental evaluation is presented on a wide variety of synthetic and real images, including different skin properties and facial expressions. In all cases, our method outperforms the current approaches in 3D reconstruction accuracy, providing striking results for both large and fine wrinkles. |
|
|
Address |
2019/11 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MSIAU; 600.086; 600.130; 600.122 |
Approved |
no |
|
|
Call Number |
Admin @ si @ |
Serial |
3708 |
|
Permanent link to this record |
|
|
|
|
Author |
Gemma Rotger; Francesc Moreno-Noguer; Felipe Lumbreras; Antonio Agudo |
|
|
Title |
Single view facial hair 3D reconstruction |
Type |
Conference Article |
|
Year |
2019 |
Publication |
9th Iberian Conference on Pattern Recognition and Image Analysis |
Abbreviated Journal |
|
|
|
Volume |
11867 |
Issue |
|
Pages |
423-436 |
|
|
Keywords |
3D Vision; Shape Reconstruction; Facial Hair Modeling |
|
|
Abstract |
In this work, we introduce a novel energy-based framework that addresses the challenging problem of 3D reconstruction of facial hair from a single RGB image. To this end, we identify hair pixels over the image via texture analysis and then determine individual hair fibers that are modeled by means of a parametric hair model based on 3D helixes. We propose to minimize an energy composed of several terms, in order to adapt the hair parameters that better fit the image detections. The final hairs respond to the resulting fibers after a post-processing step where we encourage further realism. The resulting approach generates realistic facial hair fibers from solely an RGB image without assuming any training data nor user interaction. We provide an experimental evaluation on real-world pictures where several facial hair styles and image conditions are observed, showing consistent results and establishing a comparison with respect to competing approaches. |
|
|
Address |
Madrid; July 2019 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
IbPRIA |
|
|
Notes |
MSIAU; 600.086; 600.130; 600.122 |
Approved |
no |
|
|
Call Number |
Admin @ si @ |
Serial |
3707 |
|
Permanent link to this record |