|
Mohammad Ali Bagheri, Qigang Gao, Sergio Escalera, Huamin Ren, Thomas B. Moeslund, & Elham Etemad. (2017). Locality Regularized Group Sparse Coding for Action Recognition. CVIU - Computer Vision and Image Understanding, 158, 106–114.
Abstract: Bag of visual words (BoVW) models are widely utilized in image/video representation and recognition. The cornerstone of these models is the encoding stage, in which local features are decomposed over a codebook in order to obtain a representation of features. In this paper, we propose a new encoding algorithm that jointly encodes the set of local descriptors of each sample while considering the locality structure of the descriptors. The proposed method takes advantage of locality coding, such as its stability and robustness to noise in descriptors, as well as the strengths of the group coding strategy, by taking into account the potential relations among descriptors of a sample. To implement our proposed method efficiently, we consider the Alternating Direction Method of Multipliers (ADMM) framework, which results in quadratic complexity in the problem size. The method is employed for a challenging classification problem: action recognition with depth cameras. Experimental results demonstrate that our methodology outperforms the state of the art on the considered datasets.
Keywords: Bag of words; Feature encoding; Locality constrained coding; Group sparse coding; Alternating direction method of multipliers; Action recognition
|
|
|
Miguel Angel Bautista, Oriol Pujol, Fernando De la Torre, & Sergio Escalera. (2018). Error-Correcting Factorization. TPAMI - IEEE Transactions on Pattern Analysis and Machine Intelligence, 40, 2388–2401.
Abstract: Error Correcting Output Codes (ECOC) is a successful technique in multi-class classification, which is a core problem in Pattern Recognition and Machine Learning. A major advantage of ECOC over other methods is that the multi-class problem is decoupled into a set of binary problems that are solved independently. However, the literature defines a general error-correcting capability for ECOCs without analyzing how it distributes among classes, hindering a deeper analysis of pair-wise error-correction. To address these limitations, this paper proposes an Error-Correcting Factorization (ECF) method. Our contribution is fourfold: (I) We propose a novel representation of the error-correction capability, called the design matrix, that enables us to build an ECOC on the basis of allocating correction to pairs of classes. (II) We derive the optimal code length of an ECOC using rank properties of the design matrix. (III) ECF is formulated as a discrete optimization problem, and a relaxed solution is found using an efficient constrained block coordinate descent approach. (IV) Enabled by the flexibility introduced with the design matrix, we propose to allocate the error-correction to classes that are prone to confusion. Experimental results on several databases show that when allocating the error-correction to confusable classes, ECF outperforms state-of-the-art approaches.
|
|
|
I. Sorodoc, S. Pezzelle, A. Herbelot, Mariella Dimiccoli, & R. Bernardi. (2018). Learning quantification from images: A structured neural architecture. NLE - Natural Language Engineering, 24(3), 363–392.
Abstract: Major advances have recently been made in merging language and vision representations. Most tasks considered so far have confined themselves to the processing of objects and lexicalised relations amongst objects (content words). We know, however, that humans (even pre-school children) can abstract over raw multimodal data to perform certain types of higher level reasoning, expressed in natural language by function words. A case in point is given by their ability to learn quantifiers, i.e. expressions like few, some and all. From formal semantics and cognitive linguistics, we know that quantifiers are relations over sets which, as a simplification, we can see as proportions. For instance, in most fish are red, most encodes the proportion of fish which are red fish. In this paper, we study how well current neural network strategies model such relations. We propose a task where, given an image and a query expressed by an object–property pair, the system must return a quantifier expressing which proportions of the queried object have the queried property. Our contributions are twofold. First, we show that the best performance on this task involves coupling state-of-the-art attention mechanisms with a network architecture mirroring the logical structure assigned to quantifiers by classic linguistic formalisation. Second, we introduce a new balanced dataset of image scenarios associated with quantification queries, which we hope will foster further research in this area.
|
|
|
Maedeh Aghaei, Mariella Dimiccoli, C. Canton-Ferrer, & Petia Radeva. (2018). Towards social pattern characterization from egocentric photo-streams. CVIU - Computer Vision and Image Understanding, 171, 104–117.
Abstract: Following the increasingly popular trend of social interaction analysis in egocentric vision, this article presents a comprehensive pipeline for automatic social pattern characterization of a wearable photo-camera user. The proposed framework relies solely on the visual analysis of egocentric photo-streams and consists of three major steps. The first step is to detect the social interactions of the user, where the impact of several social signals on the task is explored. The detected social events are inspected in the second step and categorized into different social meetings. These two steps operate at the event level, where each potential social event is modeled as a multi-dimensional time series whose dimensions correspond to a set of relevant features for each task; finally, an LSTM is employed to classify the time series. The last step of the framework is to characterize the social patterns of the user. Our goal is to quantify the duration, diversity, and frequency of the user's social relations in various social situations. This goal is achieved by discovering recurrences of the same people across the whole set of social events related to the user. Experimental evaluation over EgoSocialStyle (the dataset proposed in this work) and EGO-GROUP demonstrates promising results on the task of social pattern characterization from egocentric photo-streams.
Keywords: Social pattern characterization; Social signal extraction; Lifelogging; Convolutional and recurrent neural networks
|
|
|
Laura Igual, & Santiago Segui. (2017). Introduction to Data Science – A Python Approach to Concepts, Techniques and Applications. Undergraduate Topics in Computer Science. ISBN 978-3-319-50016-4.
|
|
|
Mireia Forns-Nadal, Federico Sem, Anna Mane, Laura Igual, Dani Guinart, & Oscar Vilarroya. (2017). Increased Nucleus Accumbens Volume in First-Episode Psychosis. PRN - Psychiatry Research-Neuroimaging, 263, 57–60.
Abstract: Nucleus accumbens has been reported as a key structure in the neurobiology of schizophrenia. Studies analyzing structural abnormalities have shown conflicting results, possibly related to confounding factors. We investigated the nucleus accumbens volume using manual delimitation in first-episode psychosis (FEP) controlling for age, cannabis use and medication. Thirty-one FEP subjects who were naive or minimally exposed to antipsychotics and a control group were MRI scanned and clinically assessed from baseline to 6 months of follow-up. FEP showed increased relative and total accumbens volumes. Clinical correlations with negative symptoms, duration of untreated psychosis and cannabis use were not significant.
|
|
|
Mireia Sole, Joan Blanco, Debora Gil, G. Fonseka, Richard Frodsham, Oliver Valero, et al. (2017). Análisis 3D de la territorialidad cromosómica en células espermatogénicas: explorando la infertilidad desde un nuevo prisma [3D analysis of chromosome territoriality in spermatogenic cells: exploring infertility from a new perspective]. ASEBIR - Revista Asociación para el Estudio de la Biología de la Reproducción, 105.
|
|
|
Debora Gil, Rosa Maria Ortiz, Carles Sanchez, & Antoni Rosell. (2018). Objective endoscopic measurements of central airway stenosis. A pilot study. RES - Respiration, 95, 63–69.
Abstract: Endoscopic estimation of the degree of stenosis in central airway obstruction is subjective and highly variable. Objective: To determine the benefits of using SENSA (System for Endoscopic Stenosis Assessment), an image-based computational software, for obtaining objective stenosis index (SI) measurements among a group of expert bronchoscopists and general pulmonologists. Methods: A total of 7 expert bronchoscopists and 7 general pulmonologists were enrolled to validate SENSA usage. The SI obtained by the physicians and by SENSA were compared with a reference SI to set their precision in SI computation. We used SENSA to efficiently obtain this reference SI in 11 selected cases of benign stenosis. A Web platform with three user-friendly microtasks was designed to gather the data. The users had to visually estimate the SI from videos with and without contours of the normal and the obstructed area provided by SENSA. The users were able to modify the SENSA contours to define the reference SI using morphometric bronchoscopy. Results: Visual SI estimation accuracy was associated with neither bronchoscopic experience (p = 0.71) nor the contours of the normal and the obstructed area provided by the system (p = 0.13). The precision of the SI by SENSA was 97.7% (95% CI: 92.4-103.7), which is significantly better than the precision of the SI by visual estimation (p < 0.001), with an improvement by at least 15%. Conclusion: SENSA provides objective SI measurements with a precision of up to 99.5%, which can be calculated from any bronchoscope using an affordable scalable interface. Providing normal and obstructed contours on bronchoscopic videos does not improve physicians' visual estimation of the SI.
Keywords: Bronchoscopy; Tracheal stenosis; Airway stenosis; Computer-assisted analysis
|
|
|
Jose M. Armingol, Jorge Alfonso, Nourdine Aliane, Miguel Clavijo, Sergio Campos-Cordobes, Arturo de la Escalera, et al. (2018). Environmental Perception for Intelligent Vehicles. In Intelligent Vehicles. Enabling Technologies and Future Developments (23–101).
Abstract: Because of its complexity, environmental perception represents a challenge for Intelligent Transport Systems, owing to the great variety of situations and elements that can occur in road environments and must be handled by these systems. In connection with this, a variety of solutions exist so far regarding sensors and methods, so the precision, complexity, cost, and computational load obtained by these works differ. In this chapter, some systems based on computer vision and laser techniques are presented. Fusion methods are also introduced in order to provide advanced and reliable perception systems.
Keywords: Computer vision; laser techniques; data fusion; advanced driver assistance systems; traffic monitoring systems; intelligent vehicles
|
|
|
Antonio Lopez, David Vazquez, & Gabriel Villalonga. (2018). Data for Training Models, Domain Adaptation. In Intelligent Vehicles. Enabling Technologies and Future Developments (395–436).
Abstract: Simulation can enable several developments in the field of intelligent vehicles. This chapter is divided into three main subsections. The first one deals with driving simulators: the continuous improvement of hardware performance is enabling the development of more complex driving simulators, and immersion in the simulated scene is increased by high-fidelity feedback to the driver. The second subsection explains traffic simulation and how it can be used for intelligent transport systems. Finally, since sensor-based perception and action must be based on data-driven algorithms, simulation can provide data to train and test algorithms that are afterwards implemented in vehicles. These tools are explained in the third subsection.
Keywords: Driving simulator; hardware; software; interface; traffic simulation; macroscopic simulation; microscopic simulation; virtual data; training data
|
|
|
Antonio Lopez, Atsushi Imiya, Tomas Pajdla, & Jose Manuel Alvarez. Computer Vision in Vehicle Technology: Land, Sea & Air.
Abstract: A unified view of the use of computer vision technology for different types of vehicles
Computer Vision in Vehicle Technology focuses on computer vision as on-board technology, bringing together fields of research where computer vision is progressively penetrating: the automotive sector and unmanned aerial and underwater vehicles. It also serves as a reference for researchers on current developments and challenges in application areas of computer vision involving vehicles, such as advanced driver assistance (pedestrian detection, lane departure warning, traffic sign recognition), autonomous driving and robot navigation (with visual simultaneous localization and mapping), and unmanned aerial vehicles (obstacle avoidance, landscape classification and mapping, fire risk assessment).
The overall role of computer vision for the navigation of different vehicles, as well as technology to address on-board applications, is analysed.
|
|
|
Pau Riba, Anjan Dutta, Josep Llados, Alicia Fornes, & Sounak Dey. (2017). Improving Information Retrieval in Multiwriter Scenario by Exploiting the Similarity Graph of Document Terms. In 14th International Conference on Document Analysis and Recognition (pp. 475–480).
Abstract: Information Retrieval (IR) is the activity of obtaining information resources relevant to an information need. It usually retrieves a set of objects ranked according to their relevance to the needed fact. In document analysis, information retrieval receives a lot of attention in terms of symbol and word spotting. However, for decades the community has mostly focused either on printed text or on the single-writer scenario, where state-of-the-art results have achieved reasonable performance on the available datasets. Nevertheless, the existing algorithms do not perform comparably in the multiwriter scenario. A graph representing relations between a set of objects is a structure in which each node delineates an individual element and the similarity between elements is represented as a weight on the connecting edge. In this paper, we explore different analytics of graphs constructed from words or graphical symbols, such as diffusion and shortest path, to improve the performance of information retrieval methods in the multiwriter scenario.
Keywords: document terms; information retrieval; affinity graph; graph of document terms; multiwriter; graph diffusion
|
|
|
Anjan Dutta, Pau Riba, Josep Llados, & Alicia Fornes. (2017). Pyramidal Stochastic Graphlet Embedding for Document Pattern Classification. In 14th International Conference on Document Analysis and Recognition (pp. 33–38).
Abstract: Document pattern classification methods using graphs have received a lot of attention because of their robust representation paradigm and rich theoretical background. However, the process of delineating documents with graphs introduces noise in the rendition of the underlying data, which creates instability in the graph representation. To deal with such unreliability in representation, in this paper we propose Pyramidal Stochastic Graphlet Embedding (PSGE). Given a graph representing a document pattern, our method first computes a graph pyramid by successively reducing the base graph. Once the graph pyramid is computed, we apply Stochastic Graphlet Embedding (SGE) at each level of the pyramid and combine the embedded representations to obtain a global delineation of the original graph. Considering a pyramid of graphs rather than just the base graph extends the representational power of the graph embedding, which reduces the instability caused by noise and distortion. When combined with a support vector machine, our proposed PSGE outperforms the state-of-the-art results in the recognition of handwritten words as well as graphical symbols.
Keywords: graph embedding; hierarchical graph representation; graph clustering; stochastic graphlet embedding; graph classification
|
|
|
Juan Ignacio Toledo, Sounak Dey, Alicia Fornes, & Josep Llados. (2017). Handwriting Recognition by Attribute embedding and Recurrent Neural Networks. In 14th International Conference on Document Analysis and Recognition (pp. 1038–1043).
Abstract: Handwriting recognition consists in obtaining the transcription of a text image. Recent word spotting methods based on attribute embedding have shown good performance when recognizing words. However, they are holistic methods in the sense that they recognize the word as a whole (i.e. they find the closest word in the lexicon to the word image). Consequently, these kinds of approaches cannot deal with out-of-vocabulary words, which are common in historical manuscripts, nor can they be extended to recognize text lines. To address these issues, in this paper we propose a handwriting recognition method that adapts attribute embedding to sequence learning. Concretely, the method learns the attribute embedding of patches of word images with a convolutional neural network. These embeddings are then presented as a sequence to a recurrent neural network that produces the transcription. We obtain promising results even without the use of any kind of dictionary or language model.
|
|
|
Arnau Baro, Pau Riba, Jorge Calvo-Zaragoza, & Alicia Fornes. (2017). Optical Music Recognition by Recurrent Neural Networks. In 14th IAPR International Workshop on Graphics Recognition (pp. 25–26).
Abstract: Optical Music Recognition is the task of transcribing a music score into a machine-readable format. Many music scores are written on a single staff and can therefore be treated as a sequence. This work thus explores the use of Long Short-Term Memory (LSTM) Recurrent Neural Networks for reading the music score sequentially, where the LSTM helps in keeping the context. For training, we have used a synthetic dataset of more than 40,000 images, labeled at the primitive level.
Keywords: Optical Music Recognition; Recurrent Neural Network; Long Short-Term Memory
|
|