Records | |||||
---|---|---|---|---|---|
Author | Sounak Dey; Anjan Dutta; Josep Llados; Alicia Fornes; Umapada Pal | ||||
Title | Shallow Neural Network Model for Hand-drawn Symbol Recognition in Multi-Writer Scenario | Type | Conference Article | ||
Year | 2017 | Publication | 12th IAPR International Workshop on Graphics Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 31-32 | ||
Keywords | |||||
Abstract | One of the main challenges in hand-drawn symbol recognition is the variability among symbols caused by different writer styles. In this paper, we present and discuss some results on recognizing hand-drawn symbols with a shallow neural network. A neural network model inspired by the LeNet architecture has been used to achieve state-of-the-art results with very little training data, in contrast to data-hungry deep neural networks. From the results, it is evident that neural network architectures can efficiently describe and recognize hand-drawn symbols from different writers and can model the inter-author variation | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | GREC | ||
Notes | DAG; 600.097; 600.121 | Approved | no | ||
Call Number | Admin @ si @ DDL2017 | Serial | 3057 | ||
Permanent link to this record | |||||
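The record above describes a LeNet-inspired shallow network for symbol recognition. As a rough illustration only (not the authors' implementation — the 28x28 input, filter count, and layer sizes below are assumptions), a single conv/ReLU/pool/softmax forward pass can be sketched in NumPy:

```python
import numpy as np

def conv2d(x, w):
    """Valid 2-D convolution (cross-correlation) of a single-channel
    image x with a filter bank w of shape (n_filters, kh, kw)."""
    kh, kw = w.shape[1:]
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((w.shape[0], oh, ow))
    for f in range(w.shape[0]):
        for i in range(oh):
            for j in range(ow):
                out[f, i, j] = np.sum(x[i:i + kh, j:j + kw] * w[f])
    return out

def maxpool2(x):
    """2x2 max pooling over each feature map."""
    c, h, w = x.shape
    return x[:, :h // 2 * 2, :w // 2 * 2].reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))

def shallow_symbol_net(img, filters, dense_w):
    """Conv -> ReLU -> pool -> dense softmax: a LeNet-flavoured shallow net."""
    feat = np.maximum(conv2d(img, filters), 0.0)
    pooled = maxpool2(feat)
    logits = pooled.reshape(-1) @ dense_w
    e = np.exp(logits - logits.max())
    return e / e.sum()

rng = np.random.default_rng(0)
img = rng.standard_normal((28, 28))           # assumed input size
filters = rng.standard_normal((4, 5, 5)) * 0.1
dense_w = rng.standard_normal((4 * 12 * 12, 10)) * 0.01
probs = shallow_symbol_net(img, filters, dense_w)
print(probs.shape)
```

The point of a shallow design is simply that far fewer parameters must be estimated, which is why it can work with little training data.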
Author | Pau Riba; Anjan Dutta; Josep Llados; Alicia Fornes | ||||
Title | Graph-based deep learning for graphics classification | Type | Conference Article | ||
Year | 2017 | Publication | 12th IAPR International Workshop on Graphics Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 29-30 | ||
Keywords | |||||
Abstract | Graph-based representations are a common way to deal with graphics recognition problems. However, previous works mainly focused on developing learning-free techniques. The success of deep learning frameworks has proved that learning is a powerful tool to solve many problems; however, it is not straightforward to extend these methodologies to non-Euclidean data such as graphs. On the other hand, graphs are a good representational structure for graphical entities. In this work, we present some deep learning techniques that have been proposed in the literature for graph-based representations and show how they can be used in graphics recognition problems | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | GREC | ||
Notes | DAG; 600.097; 601.302; 600.121 | Approved | no | ||
Call Number | Admin @ si @ RDL2017b | Serial | 3058 | ||
Permanent link to this record | |||||
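The record above surveys deep learning on graph-based representations. One widely used building block in that literature is the graph-convolution layer; the sketch below is a generic example (not a method from the paper — the toy graph and weight shapes are assumptions) that propagates node features over a symmetrically normalised adjacency matrix:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution layer (Kipf & Welling style): add self-loops,
    symmetrically normalise the adjacency, propagate features, apply ReLU."""
    A_hat = A + np.eye(A.shape[0])            # adjacency with self-loops
    d = A_hat.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

# toy graph: 4 nodes in a path, 3-dim one-hot-ish features, 2 output channels
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)[:, :3]
W = np.ones((3, 2))
H = gcn_layer(A, X, W)
print(H.shape)
```

Each output row mixes a node's features with those of its neighbours, which is what makes the operation meaningful on non-Euclidean graph data.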
Author | Adria Rico; Alicia Fornes | ||||
Title | Camera-based Optical Music Recognition using a Convolutional Neural Network | Type | Conference Article | ||
Year | 2017 | Publication | 12th IAPR International Workshop on Graphics Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 27-28 | ||
Keywords | optical music recognition; document analysis; convolutional neural network; deep learning | ||||
Abstract | Optical Music Recognition (OMR) consists of recognizing images of music scores. Contrary to expectation, current OMR systems usually fail when recognizing images of scores captured by digital cameras and smartphones. In this work, we propose a camera-based OMR system based on Convolutional Neural Networks, showing promising preliminary results | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | GREC | ||
Notes | DAG;600.097; 600.121 | Approved | no | ||
Call Number | Admin @ si @ RiF2017 | Serial | 3059 | ||
Permanent link to this record | |||||
Author | Oriol Vicente; Alicia Fornes; Ramon Valdes | ||||
Title | La Xarxa d'Humanitats Digitals de la UAB: una estructura inteligente para la investigación y la transferencia en Humanidades | Type | Conference Article | ||
Year | 2017 | Publication | 3rd Congreso Internacional de Humanidades Digitales Hispánicas. Sociedad Internacional | Abbreviated Journal | |
Volume | Issue | Pages | 281-383 | ||
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-84-697-5692-8 | Medium | ||
Area | Expedition | Conference | HDH | ||
Notes | DAG; 600.121 | Approved | no | ||
Call Number | Admin @ si @ VFV2017 | Serial | 3060 | ||
Permanent link to this record | |||||
Author | Alicia Fornes; Beata Megyesi; Joan Mas | ||||
Title | Transcription of Encoded Manuscripts with Image Processing Techniques | Type | Conference Article | ||
Year | 2017 | Publication | Digital Humanities Conference | Abbreviated Journal | |
Volume | Issue | Pages | 441-443 | ||
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | DH | ||
Notes | DAG; 600.097; 600.121 | Approved | no | ||
Call Number | Admin @ si @ FMM2017 | Serial | 3061 | ||
Permanent link to this record | |||||
Author | Katerine Diaz; Jesus Martinez del Rincon; Aura Hernandez-Sabate; Marçal Rusiñol; Francesc J. Ferri | ||||
Title | Fast Kernel Generalized Discriminative Common Vectors for Feature Extraction | Type | Journal Article | ||
Year | 2018 | Publication | Journal of Mathematical Imaging and Vision | Abbreviated Journal | JMIV |
Volume | 60 | Issue | 4 | Pages | 512-524 |
Keywords | |||||
Abstract | This paper presents a supervised subspace learning method called Kernel Generalized Discriminative Common Vectors (KGDCV), a novel extension of the known Discriminative Common Vectors method with kernels. Our method combines the advantages of kernel methods, which model complex data and solve nonlinear problems with moderate computational complexity, with the better generalization properties of generalized approaches for high-dimensional data. This attractive combination makes KGDCV especially suited for feature extraction and classification in computer vision, image processing and pattern recognition applications. Two different approaches to this generalization are proposed: a first one based on the kernel trick (KT) and a second one based on the nonlinear projection trick (NPT) for even higher efficiency. Both methodologies have been validated on four different image datasets containing faces, objects and handwritten digits, and compared against well-known nonlinear state-of-the-art methods. Results show better discriminant properties than other generalized approaches, both linear and kernel. In addition, the KGDCV-NPT approach presents a considerable computational gain without compromising the accuracy of the model. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | DAG; ADAS; 600.086; 600.130; 600.121; 600.118; 600.129 | Approved | no | ||
Call Number | Admin @ si @ DMH2018a | Serial | 3062 | ||
Permanent link to this record | |||||
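The KGDCV record above relies on the kernel trick: working with the Gram matrix of pairwise similarities instead of explicit nonlinear features. A minimal, generic illustration of that idea (kernel PCA on an RBF kernel — not the KGDCV algorithm itself; the data shapes and the gamma value are assumptions) looks like this:

```python
import numpy as np

def rbf_kernel(X, gamma=0.5):
    """Gram matrix of the RBF kernel: K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def kernel_pca(K, n_components=2):
    """Project data using only the Gram matrix (the kernel trick):
    centre K in feature space, eigendecompose, keep leading components."""
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one   # double centring
    vals, vecs = np.linalg.eigh(Kc)              # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]  # take the largest ones
    vals, vecs = vals[idx], vecs[:, idx]
    return vecs * np.sqrt(np.maximum(vals, 0))   # projected coordinates

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 5))
Z = kernel_pca(rbf_kernel(X))
print(Z.shape)
```

The feature map is never computed explicitly; everything happens through K, which is what keeps the computational complexity moderate for nonlinear problems.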
Author | Albert Clapes; Tinne Tuytelaars; Sergio Escalera | ||||
Title | Darwintrees for action recognition | Type | Conference Article | ||
Year | 2017 | Publication | Chalearn Workshop on Action, Gesture, and Emotion Recognition: Large Scale Multimodal Gesture Recognition and Real versus Fake expressed emotions at ICCV | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICCVW | ||
Notes | HUPBA; no menciona | Approved | no | ||
Call Number | Admin @ si @ CTE2017 | Serial | 3069 | ||
Permanent link to this record | |||||
Author | Huamin Ren; Nattiya Kanhabua; Andreas Mogelmose; Weifeng Liu; Kaustubh Kulkarni; Sergio Escalera; Xavier Baro; Thomas B. Moeslund | ||||
Title | Back-dropout Transfer Learning for Action Recognition | Type | Journal Article | ||
Year | 2018 | Publication | IET Computer Vision | Abbreviated Journal | IETCV |
Volume | 12 | Issue | 4 | Pages | 484-491 |
Keywords | Learning (artificial intelligence); Pattern Recognition | ||||
Abstract | Transfer learning aims at adapting a model learned from a source dataset to a target dataset. It is a beneficial approach especially when annotating the target dataset is expensive or infeasible. Transfer learning has demonstrated its powerful learning capabilities in various vision tasks. Despite transfer learning being a promising approach, it is still an open question how to adapt the model learned from the source dataset to the target dataset. One big challenge is to prevent the impact of category bias on classification performance. Dataset bias exists when two images from the same category, but from different datasets, are not classified the same way. To address this problem, a transfer learning algorithm has been proposed, called negative back-dropout transfer learning (NB-TL), which utilizes images that have been misclassified and further applies a back-dropout strategy on them to penalize errors. Experimental results demonstrate the effectiveness of the proposed algorithm. In particular, the authors evaluate the performance of the proposed NB-TL algorithm on the UCF 101 action recognition dataset, achieving an 88.9% recognition rate. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | HUPBA; no proj | Approved | no | ||
Call Number | Admin @ si @ RKM2018 | Serial | 3071 | ||
Permanent link to this record | |||||
Author | Mark Philip Philipsen; Jacob Velling Dueholm; Anders Jorgensen; Sergio Escalera; Thomas B. Moeslund | ||||
Title | Organ Segmentation in Poultry Viscera Using RGB-D | Type | Journal Article | ||
Year | 2018 | Publication | Sensors | Abbreviated Journal | SENS |
Volume | 18 | Issue | 1 | Pages | 117 |
Keywords | semantic segmentation; RGB-D; random forest; conditional random field; 2D; 3D; CNN | ||||
Abstract | We present a pattern recognition framework for semantic segmentation of visual structures, that is, multi-class labelling at pixel level, and apply it to the task of segmenting organs in the eviscerated viscera from slaughtered poultry in RGB-D images. This is a step towards replacing the current strenuous manual inspection at poultry processing plants. Features are extracted from feature maps such as activation maps from a convolutional neural network (CNN). A random forest classifier assigns class probabilities, which are further refined by utilizing context in a conditional random field. The presented method is compatible with both 2D and 3D features, which allows us to explore the value of adding 3D and CNN-derived features. The dataset consists of 604 RGB-D images showing 151 unique sets of eviscerated viscera from four different perspectives. A mean Jaccard index of 78.11% is achieved across the four classes of organs by using features derived from 2D, 3D and a CNN, compared to 74.28% using only basic 2D image features. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | HUPBA; no proj | Approved | no | ||
Call Number | Admin @ si @ PVJ2018 | Serial | 3072 | ||
Permanent link to this record | |||||
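The paper above reports its segmentation quality as a mean Jaccard index across organ classes. That metric can be sketched as follows (the toy label maps are made up purely for illustration):

```python
import numpy as np

def mean_jaccard(pred, gt, n_classes):
    """Mean Jaccard index (intersection over union) averaged across classes,
    skipping classes that appear in neither prediction nor ground truth."""
    scores = []
    for c in range(n_classes):
        p, g = pred == c, gt == c
        union = np.logical_or(p, g).sum()
        if union == 0:
            continue                          # class absent in both maps
        scores.append(np.logical_and(p, g).sum() / union)
    return float(np.mean(scores))

# tiny 3x3 label maps with 4 classes (illustrative only)
gt   = np.array([[0, 0, 1], [2, 2, 1], [2, 3, 3]])
pred = np.array([[0, 1, 1], [2, 2, 1], [2, 3, 1]])
print(mean_jaccard(pred, gt, 4))  # 0.625
```

Averaging per class rather than per pixel keeps small organs from being drowned out by large ones, which is presumably why the paper reports it this way.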
Author | Masakazu Iwamura; Naoyuki Morimoto; Keishi Tainaka; Dena Bazazian; Lluis Gomez; Dimosthenis Karatzas | ||||
Title | ICDAR2017 Robust Reading Challenge on Omnidirectional Video | Type | Conference Article | ||
Year | 2017 | Publication | 14th International Conference on Document Analysis and Recognition | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Results of the ICDAR 2017 Robust Reading Challenge on Omnidirectional Video are presented. This competition uses the Downtown Osaka Scene Text (DOST) Dataset, which was captured in Osaka, Japan with an omnidirectional camera. Hence, it consists of sequential images (videos) from different view angles. Regarding the sequential images as videos (video mode), two tasks of localisation and end-to-end recognition are prepared. Regarding them as a set of still images (still image mode), three tasks of localisation, cropped word recognition and end-to-end recognition are prepared. As the dataset was captured in Japan, it contains not only Japanese text but also text consisting of alphanumeric characters (Latin text). Hence, a submitted result for each task is evaluated in three ways: using Japanese-only ground truth (GT), using Latin-only GT, and using combined GTs of both. Finally, by the submission deadline, we had received two submissions in the text localisation task of the still image mode. We intend to continue the competition in the open mode. Expecting further submissions, in this report we provide baseline results in all the tasks in addition to the submissions from the community. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICDAR | ||
Notes | DAG; 600.084; 600.121 | Approved | no | ||
Call Number | Admin @ si @ IMT2017 | Serial | 3077 | ||
Permanent link to this record | |||||
Author | Laura Lopez-Fuentes; Claudio Rossi; Harald Skinnemoen | ||||
Title | River segmentation for flood monitoring | Type | Conference Article | ||
Year | 2017 | Publication | Data Science for Emergency Management at Big Data 2017 | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Floods are major natural disasters which cause deaths and material damage every year. Monitoring these events is crucial in order to reduce both the number of affected people and the economic losses. In this work we train and test three different deep learning segmentation algorithms to estimate the water area from river images, and compare their performances. We discuss the implementation of a novel data chain aimed at monitoring river water levels by automatically processing data collected from surveillance cameras, and giving alerts in case of sharp increases in the water level or flooding. We also create and openly publish the first image dataset for river water segmentation. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | LAMP; 600.084; 600.120 | Approved | no | ||
Call Number | Admin @ si @ LRS2017 | Serial | 3078 | ||
Permanent link to this record | |||||
Author | Suman Ghosh; Ernest Valveny | ||||
Title | R-PHOC: Segmentation-Free Word Spotting using CNN | Type | Conference Article | ||
Year | 2017 | Publication | 14th International Conference on Document Analysis and Recognition | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | Convolutional neural network; Image segmentation; Artificial neural network; Nearest neighbor search | ||||
Abstract | arXiv:1707.01294 This paper proposes a region-based convolutional neural network for segmentation-free word spotting. Our network takes as input an image and a set of word candidate bounding boxes and embeds all bounding boxes into an embedding space, where word spotting can be cast as a simple nearest-neighbour search between the query representation and each of the candidate bounding boxes. We make use of the PHOC embedding, as it has previously achieved significant success in segmentation-based word spotting. Word candidates are generated using a simple procedure based on grouping connected components under some spatial constraints. Experiments show that R-PHOC, which operates on images directly, can improve the current state-of-the-art on the standard GW dataset and in some cases performs as well as PHOCNET, which was designed for segmentation-based word spotting. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICDAR | ||
Notes | DAG; 600.121 | Approved | no | ||
Call Number | Admin @ si @ GhV2017a | Serial | 3079 | ||
Permanent link to this record | |||||
Author | Suman Ghosh; Ernest Valveny | ||||
Title | Visual attention models for scene text recognition | Type | Conference Article | ||
Year | 2017 | Publication | 14th International Conference on Document Analysis and Recognition | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | arXiv:1706.01487 In this paper we propose an approach to lexicon-free recognition of text in scene images. Our approach relies on an LSTM-based soft visual attention model learned from convolutional features. A set of feature vectors is derived from an intermediate convolutional layer corresponding to different areas of the image. This permits encoding spatial information into the image representation. In this way, the framework is able to learn how to selectively focus on different parts of the image. At every time step the recognizer emits one character using a weighted combination of the convolutional feature vectors according to the learned attention model. Training can be done end-to-end using only word-level annotations. In addition, we show that modifying the beam search algorithm by integrating an explicit language model leads to significantly better recognition results. We validate the performance of our approach on the standard SVT and ICDAR'03 scene text datasets, showing state-of-the-art performance in unconstrained text recognition. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICDAR | ||
Notes | DAG; 600.121 | Approved | no | ||
Call Number | Admin @ si @ GhV2017b | Serial | 3080 | ||
Permanent link to this record | |||||
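The record above describes emitting one character per step from a weighted combination of convolutional feature vectors. A minimal sketch of such a soft-attention step (a plain dot-product scoring function is assumed here; the paper's attention model is learned and conditioned on an LSTM state):

```python
import numpy as np

def soft_attention(features, query):
    """Soft visual attention: score each spatial feature vector against a
    query state, softmax the scores, return weights and the context vector."""
    scores = features @ query                 # one score per location
    w = np.exp(scores - scores.max())         # numerically stable softmax
    w /= w.sum()                              # attention weights sum to 1
    return w, w @ features                    # weighted combination

rng = np.random.default_rng(2)
features = rng.standard_normal((49, 16))      # e.g. a 7x7 conv grid, 16 channels
query = rng.standard_normal(16)               # e.g. a decoder hidden state
weights, context = soft_attention(features, query)
print(weights.shape, context.shape)
```

Because the weighted sum is differentiable, the whole pipeline can be trained end-to-end from word-level annotations alone, as the abstract notes.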
Author | Sounak Dey; Anjan Dutta; Juan Ignacio Toledo; Suman Ghosh; Josep Llados; Umapada Pal | ||||
Title | SigNet: Convolutional Siamese Network for Writer Independent Offline Signature Verification | Type | Miscellaneous | ||
Year | 2018 | Publication | Arxiv | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Offline signature verification is one of the most challenging tasks in biometrics and document forensics. Unlike other verification problems, it needs to model minute but critical differences between genuine and forged signatures, because a skilled forgery may often resemble the real signature with only small deformations. This verification task is even harder in writer-independent scenarios, which are undeniably crucial for realistic cases. In this paper, we model an offline writer-independent signature verification task with a convolutional Siamese network. Siamese networks are twin networks with shared weights, which can be trained to learn a feature space where similar observations are placed in proximity. This is achieved by exposing the network to pairs of similar and dissimilar observations and minimizing the Euclidean distance between similar pairs while simultaneously maximizing it between dissimilar pairs. Experiments conducted on cross-domain datasets emphasize the capability of our network to model forgery in different languages (scripts) and handwriting styles. Moreover, our designed Siamese network, named SigNet, exceeds the state-of-the-art results on most of the benchmark signature datasets, which paves the way for further research in this direction. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | DAG; 600.097; 600.121 | Approved | no | ||
Call Number | Admin @ si @ DDT2018 | Serial | 3085 | ||
Permanent link to this record | |||||
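SigNet trains a Siamese network by pulling genuine pairs together and pushing forged pairs apart in embedding space. The standard contrastive loss that expresses this objective can be sketched as follows (the margin value and the toy embeddings are assumptions for illustration):

```python
import numpy as np

def contrastive_loss(f1, f2, same, margin=1.0):
    """Contrastive loss for Siamese training: genuine pairs (same=True) are
    penalised by their squared distance; forged pairs are penalised only
    while they sit closer than `margin`."""
    d = np.linalg.norm(f1 - f2)
    if same:
        return 0.5 * d ** 2
    return 0.5 * max(0.0, margin - d) ** 2

a = np.array([0.1, 0.2])      # embedding of a genuine signature
b = np.array([0.1, 0.2])      # embedding of another genuine sample
c = np.array([0.5, -0.1])     # embedding of a forgery
print(contrastive_loss(a, b, True), contrastive_loss(a, c, False))
```

The margin is what stops the network from collapsing everything to one point: dissimilar pairs stop contributing gradient once they are far enough apart.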
Author | Lluis Pere de las Heras; Oriol Ramos Terrades; Josep Llados | ||||
Title | Ontology-Based Understanding of Architectural Drawings | Type | Book Chapter | ||
Year | 2017 | Publication | International Workshop on Graphics Recognition. GREC 2015.Graphic Recognition. Current Trends and Challenges | Abbreviated Journal | |
Volume | 9657 | Issue | Pages | 75-85 | |
Keywords | Graphics recognition; Floor plan analysis; Domain ontology | ||||
Abstract | In this paper we present a knowledge base of architectural documents aiming at improving existing methods of floor plan classification and understanding. It consists of an ontological definition of the domain and the inclusion of real instances coming from both automatically interpreted and manually labeled documents. The knowledge base has proven to be an effective tool to structure our knowledge and to easily maintain and upgrade it. Moreover, it is an appropriate means to automatically check the consistency of relational data and a convenient complement to hard-coded knowledge interpretation systems. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | DAG; 600.121 | Approved | no | ||
Call Number | Admin @ si @ HRL2017 | Serial | 3086 | ||
Permanent link to this record |