Records | |||||
---|---|---|---|---|---|
Author | Mireia Sole; Joan Blanco; Debora Gil; G. Fonseka; Richard Frodsham; Francesca Vidal; Zaida Sarrate | ||||
Title | Noves perspectives en l'estudi de la territorialitat cromosòmica de cèl·lules germinals masculines: estudis tridimensionals [New perspectives in the study of chromosome territoriality in male germ cells: three-dimensional studies] | Type | Journal | ||
Year | 2017 | Publication | Biologia de la Reproduccio | Abbreviated Journal | JBR |
Volume | 15 | Issue | Pages | 73-78 | |
Keywords | |||||
Abstract | In somatic cells, chromosomes occupy specific nuclear regions called chromosome territories, which are involved in the maintenance and regulation of the genome. Preliminary data in male germ cells also suggest the importance of chromosome territoriality for cell functionality. Nevertheless, the specific characteristics of testicular tissue (the presence of different cell types with different morphological characteristics, in different stages of development and with different ploidy) make it difficult to achieve conclusive results. In this study we have developed a methodology to approach the three-dimensional study of all chromosome territories in male germ cells from C57BL/6J mice (Mus musculus). The method includes the following steps: i) optimized cell fixation to obtain an optimal preservation of the three-dimensional cell morphology, ii) chromosome identification by FISH (Chromoprobe Multiprobe® OctoChrome™ Murine System; Cytocell) and confocal microscopy (TCS-SP5, Leica Microsystems), iii) cell type identification by immunofluorescence, iv) image analysis using Matlab scripts, v) numerical data extraction related to chromosome features, chromosome radial position and chromosome relative position. This methodology allows the unequivocal identification and analysis of the chromosome territories at all spermatogenic stages. Results will provide information about the features that determine chromosomal position, preferred associations between chromosomes, and the relationship between chromosome positioning and genome regulation. |
||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-84-697-3767-5 | Medium | ||
Area | Expedition | Conference | |||
Notes | IAM; 600.096; 600.145 | Approved | no | ||
Call Number | Admin @ si @ SBG2017c | Serial | 2961 | ||
Permanent link to this record | |||||
Author | Laura Lopez-Fuentes; Sebastia Massanet; Manuel Gonzalez-Hidalgo | ||||
Title | Image vignetting reduction via a maximization of fuzzy entropy | Type | Conference Article | ||
Year | 2017 | Publication | IEEE International Conference on Fuzzy Systems | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | In many computer vision applications, vignetting is an undesirable effect which must be removed in a pre-processing step. Recently, an algorithm for image vignetting correction was presented based on a minimization of log-intensity entropy. This method relies on the fact that the entropy of an image increases when it is affected by vignetting. In this paper, we propose a novel algorithm to reduce image vignetting via a maximization of the fuzzy entropy of the image. Fuzzy entropy quantifies the degree of fuzziness of a fuzzy set, and its value is also modified by the presence of vignetting. The experimental results show that this novel algorithm outperforms, in most cases, the algorithm based on the minimization of log-intensity entropy, from both a qualitative and a quantitative point of view. | ||||
Address | Naples; Italy; July 2017 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | FUZZ-IEEE | ||
Notes | LAMP; 600.120 | Approved | no | ||
Call Number | Admin @ si @ LMG2017 | Serial | 2972 | ||
Permanent link to this record | |||||
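The fuzzy-entropy quantity on which the vignetting record above builds can be sketched in a few lines. This is a minimal illustration using the common De Luca-Termini definition (an assumption, since the record does not spell out the formula), and it omits the paper's actual step of optimizing a vignetting gain model against this criterion:

```python
import math

def fuzzy_entropy(pixels):
    """De Luca-Termini fuzzy entropy of an 8-bit image, a sketch of the
    quantity the abstract refers to (hypothetical helper, not the
    authors' code). Pixel values are mapped to memberships in [0, 1]."""
    eps = 1e-12
    total = 0.0
    for p in pixels:
        m = min(max(p / 255.0, eps), 1.0 - eps)  # membership, clamped to avoid log(0)
        total -= m * math.log(m) + (1.0 - m) * math.log(1.0 - m)
    return total / len(pixels)

# A flat mid-grey image is maximally fuzzy; vignetting darkens the
# borders, pushing memberships toward 0 and lowering the entropy,
# which is why correction can be posed as entropy maximization.
flat = [128] * 100
vignetted = [128] * 50 + [30] * 50
print(fuzzy_entropy(flat) > fuzzy_entropy(vignetted))  # True
```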
Author | Angel Valencia; Roger Idrovo; Angel Sappa; Douglas Plaza; Daniel Ochoa | ||||
Title | A 3D Vision Based Approach for Optimal Grasp of Vacuum Grippers | Type | Conference Article | ||
Year | 2017 | Publication | IEEE International Workshop of Electronics, Control, Measurement, Signals and their application to Mechatronics | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | In general, robot grasping approaches are based on the usage of multi-finger grippers. However, when large objects need to be manipulated, vacuum grippers are preferred over finger-based grippers. This paper aims to estimate the best picking place for a vacuum gripper with two suction cups, when planar objects with an unknown size and geometry are considered. The approach is based on the estimation of the geometric properties of the object's shape from a partial cloud of points (a single 3D view), which are combined with a theoretical model to generate an optimal contact point that minimizes the vacuum force needed to guarantee a grasp. Experimental results in real scenarios are presented to show the validity of the proposed approach. |
||||
Address | San Sebastian; Spain; May 2017 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ECMSM | ||
Notes | ADAS; 600.086; 600.118 | Approved | no | ||
Call Number | Admin @ si @ VIS2017 | Serial | 2917 | ||
Permanent link to this record | |||||
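As a toy illustration of picking-point estimation from a partial point cloud, one can take the centroid of the observed planar surface. This is far simpler than the geometric and force model in the record above (which also accounts for the two suction cups), and is not the authors' method:

```python
def grasp_point(points):
    """Pick a candidate contact point for a planar object as the centroid
    of its (partial) 3D point cloud -- a much-simplified stand-in for the
    optimal-contact-point estimation described in the abstract."""
    n = len(points)
    return tuple(sum(coord) / n for coord in zip(*points))

# Toy cloud: four corners of a flat 2 m x 1 m panel lying at z = 0.
cloud = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (2.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
print(grasp_point(cloud))  # (1.0, 0.5, 0.0)
```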
Author | David Aldavert; Marçal Rusiñol; Ricardo Toledo | ||||
Title | Automatic Static/Variable Content Separation in Administrative Document Images | Type | Conference Article | ||
Year | 2017 | Publication | 14th International Conference on Document Analysis and Recognition | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | In this paper we present an automatic method for separating static and variable content from administrative document images. An alignment approach is able to build, without supervision, probabilistic templates from a set of examples of the same kind of document. Such templates define the likelihood of every pixel being either static or variable content. In the extraction step, the same alignment technique is used to match an incoming image with the template and to locate the positions where variable fields appear. We validate our approach on the public NIST Structured Tax Forms Dataset. |
||||
Address | Kyoto; Japan; November 2017 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICDAR | ||
Notes | DAG; 600.084; 600.121 | Approved | no | ||
Call Number | Admin @ si @ ART2017 | Serial | 3001 | ||
Permanent link to this record | |||||
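The probabilistic templates described in the record above can be illustrated with a toy sketch: once example images of the same document kind are aligned, the agreement of each pixel across examples estimates how likely it is to be static content. The alignment step itself is omitted, and this is a hypothetical simplification, not the authors' implementation:

```python
def static_likelihood(aligned_images):
    """Per-pixel likelihood of being static content, estimated as the
    agreement ratio of each pixel across a set of aligned, binarized
    document images (simplified sketch of a probabilistic template)."""
    n = len(aligned_images)
    h, w = len(aligned_images[0]), len(aligned_images[0][0])
    template = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ones = sum(img[y][x] for img in aligned_images)
            template[y][x] = max(ones, n - ones) / n  # fraction agreeing on the modal value
    return template

# Three toy 1x4 "documents": the first two pixels are static (always the
# same), the last two are variable fields that change between examples.
docs = [[[1, 0, 1, 0]], [[1, 0, 0, 1]], [[1, 0, 1, 1]]]
t = static_likelihood(docs)
print(t[0][0], t[0][1])  # 1.0 1.0
```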
Author | Aitor Alvarez-Gila; Joost Van de Weijer; Estibaliz Garrote | ||||
Title | Adversarial Networks for Spatial Context-Aware Spectral Image Reconstruction from RGB | Type | Conference Article | ||
Year | 2017 | Publication | 1st International Workshop on Physics Based Vision meets Deep Learning | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Hyperspectral signal reconstruction aims at recovering the original spectral input that produced a certain trichromatic (RGB) response from a capturing device or observer. Given the heavily underconstrained, non-linear nature of the problem, traditional techniques leverage different statistical properties of the spectral signal in order to build informative priors from real world object reflectances for constructing such an RGB-to-spectral signal mapping. However, most of them treat each sample independently, and thus do not benefit from the contextual information that the spatial dimensions can provide. We pose hyperspectral natural image reconstruction as an image-to-image mapping learning problem, and apply a conditional generative adversarial framework to help capture spatial semantics. This is the first time Convolutional Neural Networks (and, in particular, Generative Adversarial Networks) are used to solve this task. Quantitative evaluation shows a Root Mean Squared Error (RMSE) drop of 44.7% and a Relative RMSE drop of 47.0% on the ICVL natural hyperspectral image dataset. |
||||
Address | Venice; Italy; October 2017 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICCV-PBDL | ||
Notes | LAMP; 600.109; 600.106; 600.120 | Approved | no | ||
Call Number | Admin @ si @ AWG2017 | Serial | 2969 | ||
Permanent link to this record | |||||
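The evaluation metrics quoted in the record above (RMSE and Relative RMSE) can be sketched as follows. The exact relative-RMSE definition used on the ICVL dataset is assumed here, since the record does not state it, and the spectra below are invented toy data:

```python
import math

def rmse(pred, ref):
    """Root mean squared error between predicted and reference spectra,
    the first metric quoted in the abstract."""
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(pred, ref)) / len(ref))

def relative_rmse(pred, ref):
    """RMSE of the per-sample relative error -- one common definition of
    'Relative RMSE' (assumed, not spelled out in the record)."""
    return math.sqrt(sum(((p - r) / r) ** 2 for p, r in zip(pred, ref)) / len(ref))

ref = [0.2, 0.4, 0.8]    # toy reference reflectances at three bands
pred = [0.25, 0.35, 0.9]  # toy reconstructed reflectances
print(round(rmse(pred, ref), 4))  # 0.0707
```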
Author | Chun Yang; Xu Cheng Yin; Hong Yu; Dimosthenis Karatzas; Yu Cao | ||||
Title | ICDAR2017 Robust Reading Challenge on Text Extraction from Biomedical Literature Figures (DeTEXT) | Type | Conference Article | ||
Year | 2017 | Publication | 14th International Conference on Document Analysis and Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 1444-1447 | ||
Keywords | |||||
Abstract | Hundreds of millions of figures are available in the biomedical literature, representing important biomedical experimental evidence. Since text is a rich source of information in figures, automatically extracting such text may assist in the task of mining figure information and understanding biomedical documents. Unlike images in the open domain, biomedical figures present a variety of unique challenges. For example, biomedical figures typically have complex layouts, small font sizes, short text, specific text, complex symbols and irregular text arrangements. This paper presents the final results of the ICDAR 2017 Competition on Text Extraction from Biomedical Literature Figures (ICDAR2017 DeTEXT Competition), which aims at extracting (detecting and recognizing) text from biomedical literature figures. Similar to text extraction from scene images and web pictures, ICDAR2017 DeTEXT Competition includes three major tasks, i.e., text detection, cropped word recognition and end-to-end text recognition. Here, we describe in detail the data set, tasks, evaluation protocols and participants of this competition, and report the performance of the participating methods. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-1-5386-3586-5 | Medium | ||
Area | Expedition | Conference | ICDAR | ||
Notes | DAG; 600.121 | Approved | no | ||
Call Number | Admin @ si @ YCY2017 | Serial | 3098 | ||
Permanent link to this record | |||||
Author | Veronica Romero; Alicia Fornes; Enrique Vidal; Joan Andreu Sanchez | ||||
Title | Information Extraction in Handwritten Marriage Licenses Books Using the MGGI Methodology | Type | Conference Article | ||
Year | 2017 | Publication | 8th Iberian Conference on Pattern Recognition and Image Analysis | Abbreviated Journal | |
Volume | 10255 | Issue | Pages | 287-294 | |
Keywords | Handwritten Text Recognition; Information extraction; Language modeling; MGGI; Categories-based language model | ||||
Abstract | Historical records of daily activities provide intriguing insights into the life of our ancestors, useful for demographic and genealogical research. For example, marriage license books have been used for centuries by ecclesiastical and secular institutions to register marriages. These books follow a simple text structure in the records, with an evolving vocabulary mainly composed of proper names that change over time. This distinct vocabulary makes automatic transcription and semantic information extraction difficult tasks. In previous works we studied the use of category-based language models and how a Grammatical Inference technique known as MGGI could improve the accuracy of these tasks. In this work we analyze the main causes of the semantic errors observed in previous results and apply a better implementation of the MGGI technique to solve these problems. Using the resulting language model, transcription and information extraction experiments have been carried out, and the results support our proposed approach. | ||||
Address | Faro; Portugal; June 2017 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | L.A. Alexandre; J.Salvador Sanchez; Joao M. F. Rodriguez | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-3-319-58837-7 | Medium | ||
Area | Expedition | Conference | IbPRIA | ||
Notes | DAG; 602.006; 600.097; 600.121 | Approved | no | ||
Call Number | Admin @ si @ RFV2017 | Serial | 2952 | ||
Permanent link to this record | |||||
Author | Juan Ignacio Toledo; Sounak Dey; Alicia Fornes; Josep Llados | ||||
Title | Handwriting Recognition by Attribute embedding and Recurrent Neural Networks | Type | Conference Article | ||
Year | 2017 | Publication | 14th International Conference on Document Analysis and Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 1038-1043 | ||
Keywords | |||||
Abstract | Handwriting recognition consists in obtaining the transcription of a text image. Recent word spotting methods based on attribute embedding have shown good performance when recognizing words. However, they are holistic methods in the sense that they recognize the word as a whole (i.e. they find the closest word in the lexicon to the word image). Consequently, these kinds of approaches are not able to deal with out-of-vocabulary words, which are common in historical manuscripts. Also, they cannot be extended to recognize text lines. In order to address these issues, in this paper we propose a handwriting recognition method that adapts the attribute embedding to sequence learning. Concretely, the method learns the attribute embedding of patches of word images with a convolutional neural network. Then, these embeddings are presented as a sequence to a recurrent neural network that produces the transcription. We obtain promising results even without the use of any kind of dictionary or language model. |
||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICDAR | ||
Notes | DAG; 600.097; 601.225; 600.121 | Approved | no | ||
Call Number | Admin @ si @ TDF2017 | Serial | 3055 | ||
Permanent link to this record | |||||
Author | Pau Riba; Josep Llados; Alicia Fornes; Anjan Dutta | ||||
Title | Large-scale graph indexing using binary embeddings of node contexts for information spotting in document image databases | Type | Journal Article | ||
Year | 2017 | Publication | Pattern Recognition Letters | Abbreviated Journal | PRL |
Volume | 87 | Issue | Pages | 203-211 | |
Keywords | |||||
Abstract | Graph-based representations are experiencing growing usage in visual recognition and retrieval due to their representational power compared with classical appearance-based representations. However, retrieving a query graph from a large dataset of graphs implies a high computational complexity. The most important property for large-scale retrieval is that the search time complexity be sub-linear in the number of database examples. With this aim, in this paper we propose a graph indexing formalism applied to visual retrieval. A binary embedding is defined as hashing keys for graph nodes. Given a database of labeled graphs, graph nodes are complemented with vectors of attributes representing their local context. Then, each attribute vector is converted to a binary code by applying a binary-valued hash function. Therefore, graph retrieval is formulated in terms of finding target graphs in the database whose nodes have a small Hamming distance from the query nodes, easily computed with bitwise logical operators. As an application example, we validate the performance of the proposed methods in different real scenarios, such as handwritten word spotting in images of historical documents or symbol spotting in architectural floor plans. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | DAG; 600.097; 602.006; 603.053; 600.121 | Approved | no | ||
Call Number | RLF2017b | Serial | 2873 | ||
Permanent link to this record | |||||
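The bitwise Hamming-distance retrieval that the record above describes can be sketched with integer hash codes. This is an illustrative toy (codes, radius, and helper names are invented), not the authors' implementation:

```python
def hamming(a, b):
    """Hamming distance between two binary hash codes stored as ints,
    computed with the bitwise XOR the abstract alludes to."""
    return bin(a ^ b).count("1")

def nearest_nodes(query_code, db_codes, radius=2):
    """Return indices of database node codes within `radius` bits of the
    query code -- a hypothetical retrieval helper mimicking the idea of
    finding target graphs whose nodes are close in Hamming space."""
    return [i for i, c in enumerate(db_codes) if hamming(query_code, c) <= radius]

codes = [0b10110010, 0b10110011, 0b01001100]  # toy 8-bit node hashes
print(nearest_nodes(0b10110010, codes, radius=1))  # [0, 1]
```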
Author | Pau Riba; Anjan Dutta; Josep Llados; Alicia Fornes | ||||
Title | Graph-based deep learning for graphics classification | Type | Conference Article | ||
Year | 2017 | Publication | 14th International Conference on Document Analysis and Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 29-30 | ||
Keywords | |||||
Abstract | Graph-based representations are a common way to deal with graphics recognition problems. However, previous works were mainly focused on developing learning-free techniques. The success of deep learning frameworks has proved that learning is a powerful tool for solving many problems; however, it is not straightforward to extend these methodologies to non-Euclidean data such as graphs. On the other hand, graphs are a good representational structure for graphical entities. In this work, we present some deep learning techniques that have been proposed in the literature for graph-based representations and we show how they can be used in graphics recognition problems. |
||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICDAR | ||
Notes | DAG; 600.097; 601.302; 600.121 | Approved | no | ||
Call Number | Admin @ si @ RDL2017b | Serial | 3058 | ||
Permanent link to this record | |||||
Author | Pau Riba; Josep Llados; Alicia Fornes | ||||
Title | Error-tolerant coarse-to-fine matching model for hierarchical graphs | Type | Conference Article | ||
Year | 2017 | Publication | 11th IAPR-TC-15 International Workshop on Graph-Based Representations in Pattern Recognition | Abbreviated Journal | |
Volume | 10310 | Issue | Pages | 107-117 | |
Keywords | Graph matching; Hierarchical graph; Graph-based representation; Coarse-to-fine matching | ||||
Abstract | Graph-based representations are effective tools to capture structural information from visual elements. However, retrieving a query graph from a large database of graphs implies a high computational complexity. Moreover, these representations are very sensitive to noise or small changes. In this work, a novel hierarchical graph representation is designed. Using graph clustering techniques adapted from graph-based social media analysis, we propose to generate a hierarchy able to deal with different levels of abstraction while keeping information about the topology. For the proposed representations, a coarse-to-fine matching method is defined. These approaches are validated using real scenarios such as classification of colour images and handwritten word spotting. | ||||
Address | Anacapri; Italy; May 2017 | ||||
Corporate Author | Thesis | ||||
Publisher | Springer International Publishing | Place of Publication | Editor | Pasquale Foggia; Cheng-Lin Liu; Mario Vento | |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | GbRPR | ||
Notes | DAG; 600.097; 601.302; 600.121 | Approved | no | ||
Call Number | Admin @ si @ RLF2017a | Serial | 2951 | ||
Permanent link to this record | |||||
Author | Luis Herranz; Shuqiang Jiang; Ruihan Xu | ||||
Title | Modeling Restaurant Context for Food Recognition | Type | Journal Article | ||
Year | 2017 | Publication | IEEE Transactions on Multimedia | Abbreviated Journal | TMM |
Volume | 19 | Issue | 2 | Pages | 430 - 440 |
Keywords | |||||
Abstract | Food photos are widely used in food logs for diet monitoring and in social networks to share social and gastronomic experiences. A large number of these images are taken in restaurants. Dish recognition in general is very challenging, due to different cuisines, cooking styles, and the intrinsic difficulty of modeling food from its visual appearance. However, contextual knowledge can be crucial to improve recognition in such a scenario. In particular, geo-context has been widely exploited for outdoor landmark recognition. Similarly, we exploit knowledge about restaurant menus and the locations of restaurants and test images. We first adapt a framework based on discarding unlikely categories located far from the test image. Then, we reformulate the problem using a probabilistic model connecting dishes, restaurants, and locations. We apply that model in three different tasks: dish recognition, restaurant recognition, and location refinement. Experiments on six datasets show that by integrating multiple evidences (visual, location, and external knowledge) our system can boost the performance in all tasks. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | LAMP; 600.120 | Approved | no | ||
Call Number | Admin @ si @ HJX2017 | Serial | 2965 | ||
Permanent link to this record | |||||
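The geo-context filtering step described in the record above (discarding dish categories whose restaurants are far from the test image) can be sketched as follows. The restaurant data, distance threshold, and helper names are invented for illustration; the paper's full probabilistic model over dishes, restaurants, and locations is not reproduced here:

```python
import math

def candidate_dishes(photo_loc, restaurants, max_km=0.5):
    """Shortlist dish categories for a photo using geo-context: keep only
    dishes appearing on menus of restaurants near the photo location."""
    def dist_km(p, q):
        # Equirectangular approximation of great-circle distance,
        # adequate at city scale; inputs are (lat, lon) in degrees.
        kx = 111.32 * math.cos(math.radians((p[0] + q[0]) / 2))
        return math.hypot((p[0] - q[0]) * 111.32, (p[1] - q[1]) * kx)
    dishes = set()
    for r in restaurants:
        if dist_km(photo_loc, r["loc"]) <= max_km:
            dishes.update(r["menu"])
    return dishes

restaurants = [
    {"loc": (41.3851, 2.1734), "menu": {"paella", "fideua"}},  # next door
    {"loc": (41.4036, 2.1744), "menu": {"ramen"}},             # ~2 km away
]
print(sorted(candidate_dishes((41.3853, 2.1730), restaurants)))  # ['fideua', 'paella']
```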
Author | Laura Lopez-Fuentes; Claudio Rossi; Harald Skinnemoen | ||||
Title | River segmentation for flood monitoring | Type | Conference Article | ||
Year | 2017 | Publication | Data Science for Emergency Management at Big Data 2017 | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Floods are major natural disasters which cause deaths and material damage every year. Monitoring these events is crucial in order to reduce both the number of affected people and the economic losses. In this work we train and test three different Deep Learning segmentation algorithms to estimate the water area from river images, and compare their performances. We discuss the implementation of a novel data chain aimed at monitoring river water levels by automatically processing data collected from surveillance cameras, and at giving alerts in case of sharp increases in the water level or flooding. We also create and openly publish the first image dataset for river water segmentation. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | LAMP; 600.084; 600.120 | Approved | no | ||
Call Number | Admin @ si @ LRS2017 | Serial | 3078 | ||
Permanent link to this record | |||||
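Segmentation performance of the kind compared in the record above is typically measured with intersection-over-union between the predicted and reference water masks; the metric choice is an assumption, since the record does not name one:

```python
def iou(pred_mask, gt_mask):
    """Intersection-over-union between a predicted and a ground-truth
    binary water mask (flattened to 0/1 lists), a standard way to score
    segmentation quality (illustrative sketch, not the authors' code)."""
    inter = sum(p and g for p, g in zip(pred_mask, gt_mask))
    union = sum(p or g for p, g in zip(pred_mask, gt_mask))
    return inter / union if union else 1.0  # two empty masks agree perfectly

pred = [1, 1, 0, 0, 1]  # toy predicted water pixels
gt   = [1, 0, 0, 1, 1]  # toy ground-truth water pixels
print(iou(pred, gt))  # 0.5
```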
Author | Pau Rodriguez; Guillem Cucurull; Josep M. Gonfaus; Xavier Roca; Jordi Gonzalez | ||||
Title | Age and gender recognition in the wild with deep attention | Type | Journal Article | ||
Year | 2017 | Publication | Pattern Recognition | Abbreviated Journal | PR |
Volume | 72 | Issue | Pages | 563-571 | |
Keywords | Age recognition; Gender recognition; Deep neural networks; Attention mechanisms | ||||
Abstract | Face analysis in images in the wild still poses a challenge for automatic age and gender recognition tasks, mainly due to their high variability in resolution, deformation, and occlusion. Although performance has increased greatly thanks to Convolutional Neural Networks (CNNs), it is still far from optimal when compared to other image recognition tasks, mainly because of the high sensitivity of CNNs to facial variations. In this paper, inspired by biology and the recent success of attention mechanisms in visual question answering and fine-grained recognition, we propose a novel feedforward attention mechanism that is able to discover the most informative and reliable parts of a given face for improving age and gender classification. In particular, given a downsampled facial image, the proposed model is trained with a novel end-to-end learning framework to extract the most discriminative patches from the original high-resolution image. Experimental validation on the standard Adience, Images of Groups, and MORPH II benchmarks shows that including attention mechanisms enhances the performance of CNNs in terms of robustness and accuracy. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ISE; 600.098; 602.133; 600.119 | Approved | no | ||
Call Number | Admin @ si @ RCG2017b | Serial | 2962 | ||
Permanent link to this record | |||||
Author | Onur Ferhat | ||||
Title | Analysis of Head-Pose Invariant, Natural Light Gaze Estimation Methods | Type | Book Whole | ||
Year | 2017 | Publication | PhD Thesis, Universitat Autonoma de Barcelona-CVC | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Eye tracker devices have traditionally been used only inside laboratories, requiring trained professionals and elaborate setup mechanisms. However, in recent years the scientific work on easier-to-use eye trackers, which require no special hardware other than the omnipresent front-facing cameras in computers, tablets, and mobiles, is aiming at making this technology commonplace. These types of trackers have several extra challenges that make the problem harder, such as the low resolution images provided by a regular webcam, changing ambient lighting conditions, personal appearance differences, changes in head pose, and so on. Recent research in the field has focused on all these challenges in order to provide better gaze estimation performance in a real world setup.
In this work, we aim at tackling the gaze tracking problem in a single camera setup. We first analyze all the previous work in the field, identifying the strengths and weaknesses of each tried idea. We start our work on the gaze tracker with an appearance-based gaze estimation method, the simplest idea, which creates a direct mapping between a rectangular image patch extracted around the eye in a camera image and the gaze point (or gaze direction). Here, we do an extensive analysis of the factors that affect the performance of this tracker in several experimental setups, in order to address these problems in future works. In the second part of our work, we propose a feature-based gaze estimation method, which encodes the eye region image into a compact representation. We argue that this type of representation is better suited to dealing with head pose and lighting condition changes, as it both reduces the dimensionality of the input (i.e. the eye image) and breaks the direct connection between image pixel intensities and the gaze estimation. Lastly, we use a face alignment algorithm to obtain robust face pose estimation, using a 3D model customized to the subject using the tracker. We combine this with a convolutional neural network trained on a large dataset of images to build a face pose invariant gaze tracker. |
||||
Address | September 2017 | ||||
Corporate Author | Thesis | Ph.D. thesis | |||
Publisher | Ediciones Graficas Rey | Place of Publication | Editor | Fernando Vilariño | |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-84-945373-5-6 | Medium | ||
Area | Expedition | Conference | |||
Notes | MV | Approved | no | ||
Call Number | Admin @ si @ Fer2017 | Serial | 3018 | ||
Permanent link to this record |