|
Albert Clapes, Tinne Tuytelaars, & Sergio Escalera. (2017). Darwintrees for action recognition. In Chalearn Workshop on Action, Gesture, and Emotion Recognition: Large Scale Multimodal Gesture Recognition and Real versus Fake expressed emotions at ICCV.
|
|
|
Raul Gomez, Baoguang Shi, Lluis Gomez, Lukas Neumann, Andreas Veit, Jiri Matas, et al. (2017). ICDAR2017 Robust Reading Challenge on COCO-Text. In 14th International Conference on Document Analysis and Recognition.
|
|
|
Masakazu Iwamura, Naoyuki Morimoto, Keishi Tainaka, Dena Bazazian, Lluis Gomez, & Dimosthenis Karatzas. (2017). ICDAR2017 Robust Reading Challenge on Omnidirectional Video. In 14th International Conference on Document Analysis and Recognition.
Abstract: Results of the ICDAR 2017 Robust Reading Challenge on Omnidirectional Video are presented. This competition uses the Downtown Osaka Scene Text (DOST) Dataset, which was captured in Osaka, Japan with an omnidirectional camera and hence consists of sequential images (videos) taken from different view angles. Regarding the sequential images as videos (video mode), two tasks are prepared: localisation and end-to-end recognition. Regarding them as a set of still images (still image mode), three tasks are prepared: localisation, cropped word recognition and end-to-end recognition. As the dataset was captured in Japan, it contains Japanese text but also includes text consisting of alphanumeric characters (Latin text). Hence, a submitted result for each task is evaluated in three ways: using Japanese-only ground truth (GT), using Latin-only GT and using the combined GT of both. By the submission deadline, we had received two submissions in the text localisation task of the still image mode. We intend to continue the competition in open mode. Expecting further submissions, in this report we provide baseline results for all the tasks in addition to the submissions from the community.
|
|
|
Laura Lopez-Fuentes, Claudio Rossi, & Harald Skinnemoen. (2017). River segmentation for flood monitoring. In Data Science for Emergency Management at Big Data 2017.
Abstract: Floods are major natural disasters which cause deaths and material damages every year. Monitoring these events is crucial in order to reduce both the number of affected people and the economic losses. In this work we train and test three different Deep Learning segmentation algorithms to estimate the water area from river images, and compare their performance. We discuss the implementation of a novel data chain aimed at monitoring river water levels by automatically processing data collected from surveillance cameras, and at giving alerts in case of sharp increases in the water level or flooding. We also create and openly publish the first image dataset for river water segmentation.
|
|
|
Suman Ghosh, & Ernest Valveny. (2017). R-PHOC: Segmentation-Free Word Spotting using CNN. In 14th International Conference on Document Analysis and Recognition.
Abstract: (arXiv:1707.01294) This paper proposes a region-based convolutional neural network for segmentation-free word spotting. Our network takes as input an image and a set of word candidate bounding boxes and embeds all bounding boxes into an embedding space, where word spotting can be cast as a simple nearest-neighbour search between the query representation and each of the candidate bounding boxes. We make use of the PHOC embedding, as it has previously achieved significant success in segmentation-based word spotting. Word candidates are generated using a simple procedure based on grouping connected components under some spatial constraints. Experiments show that R-PHOC, which operates on images directly, can improve the current state-of-the-art on the standard GW dataset and in some cases performs as well as PHOCNET, which was designed for segmentation-based word spotting.
Keywords: Convolutional neural network; Image segmentation; Artificial neural network; Nearest neighbor search
|
|
|
Suman Ghosh, & Ernest Valveny. (2017). Visual attention models for scene text recognition. In 14th International Conference on Document Analysis and Recognition.
Abstract: (arXiv:1706.01487) In this paper we propose an approach to lexicon-free recognition of text in scene images. Our approach relies on an LSTM-based soft visual attention model learned from convolutional features. A set of feature vectors is derived from an intermediate convolutional layer, corresponding to different areas of the image. This permits encoding spatial information into the image representation. In this way, the framework is able to learn how to selectively focus on different parts of the image. At every time step the recognizer emits one character using a weighted combination of the convolutional feature vectors according to the learned attention model. Training can be done end-to-end using only word-level annotations. In addition, we show that modifying the beam search algorithm by integrating an explicit language model leads to significantly better recognition results. We validate the performance of our approach on the standard SVT and ICDAR'03 scene text datasets, showing state-of-the-art performance in unconstrained text recognition.
|
|
|
Konstantia Georgouli, Katerine Diaz, Jesus Martinez del Rincon, & Anastasios Koidis. (2017). Building generic, easily-updatable chemometric models with harmonisation and augmentation features: The case of FTIR vegetable oils classification. In 3rd International Conference Metrology Promoting Standardization and Harmonization in Food and Nutrition.
|
|
|
Albert Berenguel, Oriol Ramos Terrades, Josep Llados, & Cristina Cañero. (2017). Evaluation of Texture Descriptors for Validation of Counterfeit Documents. In 14th International Conference on Document Analysis and Recognition (pp. 1237–1242).
Abstract: This paper describes an exhaustive comparative analysis and evaluation of different existing texture descriptor algorithms to differentiate between genuine and counterfeit documents. We include in our experiments different categories of algorithms and compare them in different scenarios with several counterfeit datasets, comprising banknotes and identity documents. The computational time of extracting each descriptor is important because the final objective is to use it in a real industrial scenario. HOG- and CNN-based descriptors stand out statistically over the rest in terms of F1-score/time ratio performance.
|
|
|
Chun Yang, Xu-Cheng Yin, Hong Yu, Dimosthenis Karatzas, & Yu Cao. (2017). ICDAR2017 Robust Reading Challenge on Text Extraction from Biomedical Literature Figures (DeTEXT). In 14th International Conference on Document Analysis and Recognition (pp. 1444–1447).
Abstract: Hundreds of millions of figures are available in the biomedical literature, representing important biomedical experimental evidence. Since text is a rich source of information in figures, automatically extracting such text may assist in the task of mining figure information and understanding biomedical documents. Unlike images in the open domain, biomedical figures present a variety of unique challenges. For example, biomedical figures typically have complex layouts, small font sizes, short text, specific text, complex symbols and irregular text arrangements. This paper presents the final results of the ICDAR 2017 Competition on Text Extraction from Biomedical Literature Figures (ICDAR2017 DeTEXT Competition), which aims at extracting (detecting and recognizing) text from biomedical literature figures. Similar to text extraction from scene images and web pictures, ICDAR2017 DeTEXT Competition includes three major tasks, i.e., text detection, cropped word recognition and end-to-end text recognition. Here, we describe in detail the data set, tasks, evaluation protocols and participants of this competition, and report the performance of the participating methods.
|
|
|
Bojana Gajic, Eduard Vazquez, & Ramon Baldrich. (2017). Evaluation of Deep Image Descriptors for Texture Retrieval. In Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2017) (pp. 251–257).
Abstract: The increasing complexity learnt in the layers of a Convolutional Neural Network has proven to be of great help for the task of classification. The topic has received great attention in recently published literature.
Nonetheless, just a handful of works study low-level representations, commonly associated with lower layers. In this paper, we explore recent findings which conclude, counterintuitively, that the last layer of the VGG convolutional network is the best to describe a low-level property such as texture. To shed some light on this issue, we propose a psychophysical experiment to evaluate the adequacy of different layers of the VGG network for texture retrieval. The results obtained suggest that, whereas the last convolutional layer is a good choice for a specific classification task, it might not be the best choice as a texture descriptor, showing very poor performance on texture retrieval. Intermediate layers show the best performance, combining basic filters, as in the primary visual cortex, with a degree of higher-level information to describe more complex textures.
Keywords: Texture Representation; Texture Retrieval; Convolutional Neural Networks; Psychophysical Evaluation
|
|
|
Mireia Sole, Joan Blanco, Debora Gil, G. Fonseka, Richard Frodsham, Francesca Vidal, et al. (2017). Noves perspectives en l'estudi de la territorialitat cromosòmica de cèl·lules germinals masculines: estudis tridimensionals [New perspectives in the study of chromosome territoriality in male germ cells: three-dimensional studies]. JBR - Biologia de la Reproduccio, 73–78.
Abstract: In somatic cells, chromosomes occupy specific nuclear regions called chromosome territories, which are involved in the maintenance and regulation of the genome. Preliminary data in male germ cells also suggest the importance of chromosome territoriality in cell functionality. Nevertheless, the specific characteristics of testicular tissue (presence of different cell types with different morphological characteristics, in different stages of development and with different ploidy) make it difficult to achieve conclusive results. In this study we have developed a methodology for the three-dimensional study of all chromosome territories in male germ cells from C57BL/6J mice (Mus musculus). The method includes the following steps: i) optimized cell fixation to obtain an optimal preservation of three-dimensional cell morphology; ii) chromosome identification by FISH (Chromoprobe Multiprobe® OctoChrome™ Murine System; Cytocell) and confocal microscopy (TCS-SP5, Leica Microsystems); iii) cell type identification by immunofluorescence; iv) image analysis using Matlab scripts; v) extraction of numerical data related to chromosome features, chromosome radial position and chromosome relative position. This methodology allows the unequivocal identification and analysis of the chromosome territories of all spermatogenic stages. The results will provide information about the features that determine chromosomal position, preferred associations between chromosomes, and the relationship between chromosome positioning and genome regulation.
|
|
|
Jordi Esquirol, Cristina Palmero, Vanessa Bayo, Miquel Angel Cos, Sergio Escalera, David Sanchez, et al. (2017). Automatic RGB-depth-pressure anthropometric analysis and individualised sleep solution prescription. JMET - Journal of Medical Engineering & Technology, 486–497.
Abstract: INTRODUCTION:
Sleep surfaces must adapt to individual somatotypic features to maintain a comfortable, convenient and healthy sleep, preventing diseases and injuries. Individually determining the most adequate rest surface can often be a complex and subjective question.
OBJECTIVES:
To design and validate an automatic multimodal somatotype determination model to automatically recommend an individually designed mattress-topper-pillow combination.
METHODS:
Design and validation of an automated prescription model for an individualised sleep system is performed through single-image 2D-3D analysis and body pressure distribution, to objectively determine optimal individual sleep surfaces combining five different mattress densities, three different toppers and three cervical pillows.
RESULTS:
A final study (n = 151) and re-analysis (n = 117) defined and validated the model, showing high correlations between calculated and real data (>85% in height and body circumferences, 89.9% in weight, 80.4% in body mass index and more than 70% in morphotype categorisation).
CONCLUSIONS:
The somatotype determination model can accurately prescribe an individualised sleep solution. This can be useful for healthy people and for health centres that need to adapt sleep surfaces to people with special needs. Next steps will increase the model's accuracy and analyse whether this prescribed individualised sleep solution can improve sleep quantity and quality; additionally, future studies will adapt the model to mattresses with technological improvements and tailor-made production, and will define interfaces for people with special needs.
|
|
|
Mireia Sole, Joan Blanco, Debora Gil, G. Fonseka, Richard Frodsham, Oliver Valero, et al. (2017). Análisis 3D de la territorialidad cromosómica en células espermatogénicas: explorando la infertilidad desde un nuevo prisma [3D analysis of chromosome territoriality in spermatogenic cells: exploring infertility through a new lens]. ASEBIR - Revista Asociación para el Estudio de la Biología de la Reproducción, 105.
|
|
|
Marc Bolaños, Mariella Dimiccoli, & Petia Radeva. (2017). Towards Storytelling from Visual Lifelogging: An Overview. THMS - IEEE Transactions on Human-Machine Systems, 47(1), 77–90.
Abstract: Visual lifelogging consists of acquiring images that capture the daily experiences of the user by wearing a camera over a long period of time. The pictures taken offer considerable potential for knowledge mining concerning how people live their lives; hence, they open up new opportunities for many potential applications in fields including healthcare, security, leisure and the quantified self. However, automatically building a story from a huge collection of unstructured egocentric data presents major challenges. This paper provides a thorough review of advances made so far in egocentric data analysis and, in view of the current state of the art, indicates new lines of research to move us towards storytelling from visual lifelogging.
|
|
|
Mariella Dimiccoli, Marc Bolaños, Estefania Talavera, Maedeh Aghaei, Stavri G. Nikolov, & Petia Radeva. (2017). SR-Clustering: Semantic Regularized Clustering for Egocentric Photo Streams Segmentation. CVIU - Computer Vision and Image Understanding, 155, 55–69.
Abstract: While wearable cameras are becoming increasingly popular, locating relevant information in large unstructured collections of egocentric images is still a tedious and time-consuming process. This paper addresses the problem of organizing egocentric photo streams acquired by a wearable camera into semantically meaningful segments. First, contextual and semantic information is extracted for each image by employing a Convolutional Neural Network approach. Later, by integrating language processing, a vocabulary of concepts is defined in a semantic space. Finally, by exploiting the temporal coherence of photo streams, images which share contextual and semantic attributes are grouped together. The resulting temporal segmentation is particularly suited for further analysis, ranging from activity and event recognition to semantic indexing and summarization. Experiments over egocentric sets of nearly 17,000 images show that the proposed approach outperforms state-of-the-art methods.
|
|