|
Records |
Links |
|
Author |
Pau Riba; Adria Molina; Lluis Gomez; Oriol Ramos Terrades; Josep Llados |
|
|
Title |
Learning to Rank Words: Optimizing Ranking Metrics for Word Spotting |
Type |
Conference Article |
|
Year |
2021 |
Publication |
16th International Conference on Document Analysis and Recognition |
Abbreviated Journal |
|
|
|
Volume |
12822 |
Issue |
|
Pages |
381–395 |
|
|
Keywords |
|
|
|
Abstract |
In this paper, we explore and evaluate the use of ranking-based objective functions for simultaneously learning a word string and a word image encoder. We consider retrieval frameworks in which the user expects a retrieval list ranked according to a defined relevance score. In the context of a word spotting problem, the relevance score is set according to the string edit distance from the query string. We experimentally demonstrate the competitive performance of the proposed model on query-by-string word spotting for both handwritten and real-scene word images. We also provide results for query-by-example word spotting, although it is not the main focus of this work. |
|
|
Address |
Lausanne; Switzerland; September 2021 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICDAR |
|
|
Notes |
DAG; 600.121; 600.140; 110.312 |
Approved |
no |
|
|
Call Number |
Admin @ si @ RMG2021 |
Serial |
3572 |
|
|
|
|
|
Author |
Sergio Escalera; Oriol Pujol; Petia Radeva; Jordi Vitria |
|
|
Title |
Measuring Interest of Human Dyadic Interactions |
Type |
Conference Article |
|
Year |
2009 |
Publication |
12th International Conference of the Catalan Association for Artificial Intelligence |
Abbreviated Journal |
|
|
|
Volume |
202 |
Issue |
|
Pages |
45-54 |
|
|
Keywords |
|
|
|
Abstract |
In this paper, we argue that, using only behavioural motion information, we are able to predict the interest of observers when looking at face-to-face interactions. We propose a set of movement-related features from body, face, and mouth activity in order to define a set of higher-level interaction features, such as stress, activity, speaking engagement, and corporal engagement. An Error-Correcting Output Codes framework with an AdaBoost base classifier is used to learn to rank the observers' perceived interest in face-to-face interactions. The automatic system shows good correlation between the automatic categorization results and the manual ranking made by the observers. In particular, the learning system shows that stress features have high predictive power for ranking the interest of observers when looking at face-to-face interactions. |
|
|
Address |
Cardona (Spain) |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
978-1-60750-061-2 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CCIA |
|
|
Notes |
OR;MILAB;HuPBA;MV |
Approved |
no |
|
|
Call Number |
BCNPCL @ bcnpcl @ EPR2009b |
Serial |
1182 |
|
|
|
|
|
Author |
Ignasi Rius; Jordi Gonzalez; Javier Varona; Xavier Roca |
|
|
Title |
Action-specific motion prior for efficient bayesian 3D human body tracking |
Type |
Journal Article |
|
Year |
2009 |
Publication |
Pattern Recognition |
Abbreviated Journal |
PR |
|
|
Volume |
42 |
Issue |
11 |
Pages |
2907–2921 |
|
|
Keywords |
|
|
|
Abstract |
In this paper, we aim to reconstruct the 3D motion parameters of a human body model from the known 2D positions of a reduced set of joints in the image plane. Towards this end, an action-specific motion model is trained from a database of real motion-captured performances. The learnt motion model is used within a particle filtering framework as a priori knowledge on human motion. First, our dynamic model guides the particles according to similar situations previously learnt. Then, the solution space is constrained so only feasible human postures are accepted as valid solutions at each time step. As a result, we are able to track the 3D configuration of the full human body from several cycles of walking motion sequences using only the 2D positions of a very reduced set of joints from lateral or frontal viewpoints. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0031-3203 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ISE |
Approved |
no |
|
|
Call Number |
ISE @ ise @ RGV2009 |
Serial |
1159 |
|
|
|
|
|
Author |
Palaiahnakote Shivakumara; Anjan Dutta; Chew Lim Tan; Umapada Pal |
|
|
Title |
Multi-oriented scene text detection in video based on wavelet and angle projection boundary growing |
Type |
Journal Article |
|
Year |
2014 |
Publication |
Multimedia Tools and Applications |
Abbreviated Journal |
MTAP |
|
|
Volume |
72 |
Issue |
1 |
Pages |
515-539 |
|
|
Keywords |
|
|
|
Abstract |
In this paper, we address two complex issues: 1) text frame classification and 2) multi-oriented text detection in video text frames. We first divide a video frame into 16 blocks and propose a combination of wavelet and median-moments with k-means clustering at the block level to identify probable text blocks. For each probable text block, the method applies the same combination of features with k-means clustering over a sliding window running through the blocks to identify potential text candidates. We introduce a new idea of symmetry on text candidates in each block, based on the observation that the pixel distribution in text exhibits a symmetric pattern. The method integrates all blocks containing text candidates in the frame, and all text candidates are then mapped onto a Sobel edge map of the original frame to obtain text representatives. To tackle the multi-orientation problem, we present a new method called Angle Projection Boundary Growing (APBG), an iterative algorithm based on a nearest-neighbor concept. APBG is then applied on the text representatives to fix the bounding box for multi-oriented text lines in the video frame. Directional information is used to eliminate false positives. Experimental results on a variety of datasets, such as non-horizontal and horizontal data, publicly available data (Hua’s data), and ICDAR-03 competition data (camera images), show that the proposed method outperforms existing methods proposed for video as well as state-of-the-art methods for scene text. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer US |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1380-7501 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
DAG; 600.077 |
Approved |
no |
|
|
Call Number |
Admin @ si @ SDT2014 |
Serial |
2357 |
|
|
|
|
|
Author |
Marc Bolaños; Alvaro Peris; Francisco Casacuberta; Petia Radeva |
|
|
Title |
VIBIKNet: Visual Bidirectional Kernelized Network for Visual Question Answering |
Type |
Conference Article |
|
Year |
2017 |
Publication |
8th Iberian Conference on Pattern Recognition and Image Analysis |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
Visual Question Answering; Convolutional Neural Networks; Long Short-Term Memory networks |
|
|
Abstract |
In this paper, we address the problem of visual question answering by proposing a novel model, called VIBIKNet. Our model is based on integrating Kernelized Convolutional Neural Networks and Long Short-Term Memory units to generate an answer given a question about an image. We show that VIBIKNet offers an optimal trade-off between accuracy and computational load, in terms of memory and time consumption. We validate our method on the VQA challenge dataset and compare it to the top-performing methods in order to illustrate its performance and speed. |
|
|
Address |
Faro; Portugal; June 2017 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
IbPRIA |
|
|
Notes |
MILAB; no proj |
Approved |
no |
|
|
Call Number |
Admin @ si @ BPC2017 |
Serial |
2939 |
|
|
|
|
|
Author |
Marçal Rusiñol; Josep Llados; Gemma Sanchez |
|
|
Title |
Symbol Spotting in Vectorized Technical Drawings Through a Lookup Table of Region Strings |
Type |
Journal Article |
|
Year |
2010 |
Publication |
Pattern Analysis and Applications |
Abbreviated Journal |
PAA |
|
|
Volume |
13 |
Issue |
3 |
Pages |
321-331 |
|
|
Keywords |
|
|
|
Abstract |
In this paper, we address the problem of symbol spotting in technical document images, applied to scanned and vectorized line drawings. Like any information spotting architecture, our approach has two components: first, symbols are decomposed into primitives which are compactly represented, and second, a primitive indexing structure aims to efficiently retrieve similar primitives. Primitives are encoded in terms of attributed strings representing closed regions. Similar strings are clustered in a lookup table so that the set median strings act as indexing keys. A voting scheme formulates hypotheses at certain locations of the line drawing image where there is a high presence of regions similar to the queried ones and, therefore, a high probability of finding the queried graphical symbol. The proposed approach is illustrated in a framework consisting of spotting furniture symbols in architectural drawings. It has been shown to work even in the presence of noise and distortion introduced by the scanning and raster-to-vector processes. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer-Verlag |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1433-7541 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
DAG |
Approved |
no |
|
|
Call Number |
DAG @ dag @ RLS2010 |
Serial |
1165 |
|
|
|
|
|
Author |
Diego Cheda; Daniel Ponsa; Antonio Lopez |
|
|
Title |
Monocular Depth-based Background Estimation |
Type |
Conference Article |
|
Year |
2012 |
Publication |
7th International Conference on Computer Vision Theory and Applications |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
323-328 |
|
|
Keywords |
|
|
|
Abstract |
In this paper, we address the problem of reconstructing the background of a scene from a video sequence with occluding objects. The images are taken by hand-held cameras. Our method composes the background by selecting the appropriate pixels from previously aligned input images. To do that, we minimize a cost function that penalizes the deviations from the following assumptions: background represents objects whose distance to the camera is maximal, and background objects are stationary. Distance information is roughly obtained by a supervised learning approach that allows us to distinguish between close and distant image regions. Moving foreground objects are filtered out by using stationariness and motion boundary constancy measurements. The cost function is minimized by a graph cuts method. We demonstrate the applicability of our approach to recover an occlusion-free background in a set of sequences. |
|
|
Address |
Roma |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
VISAPP |
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
Admin @ si @ CPL2012b; ADAS @ adas @ cpl2012e |
Serial |
2012 |
|
|
|
|
|
Author |
Michael Holte; Bhaskar Chakraborty; Jordi Gonzalez; Thomas B. Moeslund |
|
|
Title |
A Local 3D Motion Descriptor for Multi-View Human Action Recognition from 4D Spatio-Temporal Interest Points |
Type |
Journal Article |
|
Year |
2012 |
Publication |
IEEE Journal of Selected Topics in Signal Processing |
Abbreviated Journal |
J-STSP |
|
|
Volume |
6 |
Issue |
5 |
Pages |
553-565 |
|
|
Keywords |
|
|
|
Abstract |
In this paper, we address the problem of human action recognition in reconstructed 3-D data acquired by multi-camera systems. We contribute to this field by introducing a novel 3-D action recognition approach based on detection of 4-D (3-D space + time) spatio-temporal interest points (STIPs) and local description of 3-D motion features. STIPs are detected in multi-view images and extended to 4-D using 3-D reconstructions of the actors and pixel-to-vertex correspondences of the multi-camera setup. Local 3-D motion descriptors, histogram of optical 3-D flow (HOF3D), are extracted from estimated 3-D optical flow in the neighborhood of each 4-D STIP and made view-invariant. The local HOF3D descriptors are divided using 3-D spatial pyramids to capture and improve the discrimination between arm- and leg-based actions. Based on these pyramids of HOF3D descriptors, we build a bag-of-words (BoW) vocabulary of human actions, which is compressed and classified using agglomerative information bottleneck (AIB) and support vector machines (SVMs), respectively. Experiments on the publicly available i3DPost and IXMAS datasets show promising state-of-the-art results and validate the performance and view-invariance of the approach. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1932-4553 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ISE |
Approved |
no |
|
|
Call Number |
Admin @ si @ HCG2012 |
Serial |
1994 |
|
|
|
|
|
Author |
Fadi Dornaika; Abdelmalik Moujahid; Bogdan Raducanu |
|
|
Title |
Facial expression recognition using tracked facial actions: Classifier performance analysis |
Type |
Journal Article |
|
Year |
2013 |
Publication |
Engineering Applications of Artificial Intelligence |
Abbreviated Journal |
EAAI |
|
|
Volume |
26 |
Issue |
1 |
Pages |
467-477 |
|
|
Keywords |
Visual face tracking; 3D deformable models; Facial actions; Dynamic facial expression recognition; Human–computer interaction |
|
|
Abstract |
In this paper, we address the analysis and recognition of facial expressions in continuous videos. More precisely, we study the performance of classifiers that exploit head-pose-independent temporal facial action parameters. These are provided by an appearance-based 3D face tracker that simultaneously provides the 3D head pose and facial actions. The use of such a tracker makes the recognition pose- and texture-independent. Two different schemes are studied. The first scheme adopts a dynamic time warping technique for recognizing expressions, where training data are given by temporal signatures associated with different universal facial expressions. The second scheme models temporal signatures associated with facial actions as fixed-length feature vectors (observations), and uses machine learning algorithms to recognize the displayed expression. Experiments, carried out on CMU video sequences and home-made video sequences, quantified the performance of the different schemes. The results show that the use of dimension reduction techniques on the extracted time series can improve the classification performance. Moreover, these experiments show that the best recognition rate can be above 90%. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Elsevier |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
OR; 600.046;MV |
Approved |
no |
|
|
Call Number |
Admin @ si @ DMR2013 |
Serial |
2185 |
|
|
|
|
|
Author |
Mohammad Rouhani; Angel Sappa |
|
|
Title |
The Richer Representation the Better Registration |
Type |
Journal Article |
|
Year |
2013 |
Publication |
IEEE Transactions on Image Processing |
Abbreviated Journal |
TIP |
|
|
Volume |
22 |
Issue |
12 |
Pages |
5036-5049 |
|
|
Keywords |
|
|
|
Abstract |
In this paper, the registration problem is formulated as a point-to-model distance minimization. Unlike most of the existing works, which are based on minimizing a point-wise correspondence term, this formulation avoids the time-consuming correspondence search. In the first stage, the target set is described through an implicit function by employing a linear least squares fitting. This function can be either an implicit polynomial or an implicit B-spline, from a coarse to a fine representation. In the second stage, we show how the obtained implicit representation is used as an interface to convert the point-to-point registration into a point-to-implicit problem. Furthermore, we show that this registration distance is smooth and can be minimized through the Levenberg-Marquardt algorithm. All the formulations presented for both stages are compact and easy to implement. In addition, we show that our registration method can be handled using any implicit representation, though some are coarse and others provide finer representations; hence, a trade-off between speed and accuracy can be set by employing the right implicit function. Experimental results and comparisons in 2D and 3D show the robustness and the speed of convergence of the proposed approach. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1057-7149 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
Admin @ si @ RoS2013 |
Serial |
2665 |
|
|
|
|
|
Author |
Eduard Vazquez; Theo Gevers; M. Lucassen; Joost Van de Weijer; Ramon Baldrich |
|
|
Title |
Saliency of Color Image Derivatives: A Comparison between Computational Models and Human Perception |
Type |
Journal Article |
|
Year |
2010 |
Publication |
Journal of the Optical Society of America A |
Abbreviated Journal |
JOSA A |
|
|
Volume |
27 |
Issue |
3 |
Pages |
613–621 |
|
|
Keywords |
|
|
|
Abstract |
In this paper, computational methods are proposed to compute color edge saliency based on the information content of color edges. The computational methods are evaluated on bottom-up saliency in a psychophysical experiment, and on the more complex task of salient object detection in real-world images. The psychophysical experiment demonstrates the relevance of using information theory as a saliency processing model and shows that the proposed methods are significantly better at predicting color saliency (with a human-method correspondence of up to 74.75% and an observer agreement of 86.8%) than state-of-the-art models. Furthermore, results from salient object detection confirm that an early fusion of color and contrast provides accurate performance in computing visual saliency, with a hit rate of up to 95.2%. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ISE;CIC |
Approved |
no |
|
|
Call Number |
CAT @ cat @ VGL2010 |
Serial |
1275 |
|
|
|
|
|
Author |
Katerine Diaz; Jesus Martinez del Rincon; Aura Hernandez-Sabate |
|
|
Title |
Decremental generalized discriminative common vectors applied to images classification |
Type |
Journal Article |
|
Year |
2017 |
Publication |
Knowledge-Based Systems |
Abbreviated Journal |
KBS |
|
|
Volume |
131 |
Issue |
|
Pages |
46-57 |
|
|
Keywords |
Decremental learning; Generalized Discriminative Common Vectors; Feature extraction; Linear subspace methods; Classification |
|
|
Abstract |
In this paper, a novel decremental subspace-based learning method called the Decremental Generalized Discriminative Common Vectors method (DGDCV) is presented. The method makes use of the concept of decremental learning, which we introduce in the field of supervised feature extraction and classification. By efficiently removing unnecessary data and/or classes from a knowledge base, our methodology is able to update the model without recalculating the full projection or accessing the previously processed training data, while retaining the previously acquired knowledge. The proposed method has been validated on six standard face recognition datasets, showing a considerable computational gain without compromising the accuracy of the model. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS; 600.118; 600.121 |
Approved |
no |
|
|
Call Number |
Admin @ si @ DMH2017a |
Serial |
3003 |
|
|
|
|
|
Author |
Katerine Diaz; Jesus Martinez del Rincon; Marçal Rusiñol; Aura Hernandez-Sabate |
|
|
Title |
Feature Extraction by Using Dual-Generalized Discriminative Common Vectors |
Type |
Journal Article |
|
Year |
2019 |
Publication |
Journal of Mathematical Imaging and Vision |
Abbreviated Journal |
JMIV |
|
|
Volume |
61 |
Issue |
3 |
Pages |
331-351 |
|
|
Keywords |
Online feature extraction; Generalized discriminative common vectors; Dual learning; Incremental learning; Decremental learning |
|
|
Abstract |
In this paper, a dual online subspace-based learning method called dual-generalized discriminative common vectors (Dual-GDCV) is presented. The method extends incremental GDCV by simultaneously exploiting the concepts of incremental and decremental learning for supervised feature extraction and classification. Our methodology is able to update the feature representation space without recalculating the full projection or accessing the previously processed training data. It allows both adding information and removing unnecessary data from a knowledge base in an efficient way, while retaining the previously acquired knowledge. The proposed method has been theoretically proven and empirically validated on six standard face recognition and classification datasets, under two scenarios: (1) removing and adding samples of existing classes, and (2) removing and adding new classes to a classification problem. Results show a considerable computational gain without compromising the accuracy of the model, in comparison with both batch methodologies and other state-of-the-art adaptive methods. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
DAG; ADAS; 600.084; 600.118; 600.121; 600.129 |
Approved |
no |
|
|
Call Number |
Admin @ si @ DRR2019 |
Serial |
3172 |
|
|
|
|
|
Author |
Razieh Rastgoo; Kourosh Kiani; Sergio Escalera |
|
|
Title |
Multi-Modal Deep Hand Sign Language Recognition in Still Images Using Restricted Boltzmann Machine |
Type |
Journal Article |
|
Year |
2018 |
Publication |
Entropy |
Abbreviated Journal |
ENTROPY |
|
|
Volume |
20 |
Issue |
11 |
Pages |
809 |
|
|
Keywords |
hand sign language; deep learning; restricted Boltzmann machine (RBM); multi-modal; profoundly deaf; noisy image |
|
|
Abstract |
In this paper, a deep learning approach, the Restricted Boltzmann Machine (RBM), is used to perform automatic hand sign language recognition from visual data. We evaluate how the RBM, as a deep generative model, is capable of generating the distribution of the input data for enhanced recognition of unseen data. Two modalities, RGB and Depth, are considered in the model input in three forms: original image, cropped image, and noisy cropped image. Five crops of the input image are used, and the hands in these cropped images are detected using a Convolutional Neural Network (CNN). After that, three types of detected hand images are generated for each modality and input to RBMs. The outputs of the RBMs for the two modalities are fused in another RBM in order to recognize the output sign label of the input image. The proposed multi-modal model is trained on all and part of the American alphabet and digits of four publicly available datasets. We also evaluate the robustness of the proposal against noise. Experimental results show that the proposed multi-modal model, using crops and the RBM fusing methodology, achieves state-of-the-art results on the Massey University Gesture Dataset 2012, the American Sign Language (ASL) Fingerspelling Dataset from the University of Surrey’s Centre for Vision, Speech and Signal Processing, the NYU dataset, and the ASL Fingerspelling A dataset. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HUPBA; no proj |
Approved |
no |
|
|
Call Number |
Admin @ si @ RKE2018 |
Serial |
3198 |
|
|
|
|
|
Author |
Katerine Diaz; Jesus Martinez del Rincon; Aura Hernandez-Sabate; Debora Gil |
|
|
Title |
Continuous head pose estimation using manifold subspace embedding and multivariate regression |
Type |
Journal Article |
|
Year |
2018 |
Publication |
IEEE Access |
Abbreviated Journal |
ACCESS |
|
|
Volume |
6 |
Issue |
|
Pages |
18325 - 18334 |
|
|
Keywords |
Head Pose estimation; HOG features; Generalized Discriminative Common Vectors; B-splines; Multiple linear regression |
|
|
Abstract |
In this paper, a continuous head pose estimation system is proposed to estimate yaw and pitch head angles from raw facial images. Our approach is based on manifold learning-based methods, due to their promising generalization properties shown for face modelling from images. The method combines histograms of oriented gradients, generalized discriminative common vectors, and continuous local regression to achieve successful performance. Our proposal was tested on multiple standard face datasets, as well as in a realistic scenario. Results show a considerable performance improvement and a higher consistency of our model in comparison with other state-of-the-art methods, with angular errors varying between 9 and 17 degrees. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
2169-3536 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS; 600.118 |
Approved |
no |
|
|
Call Number |
Admin @ si @ DMH2018b |
Serial |
3091 |
|