Records | |||||
---|---|---|---|---|---|
Author | Sangheeta Roy; Palaiahnakote Shivakumara; Namita Jain; Vijeta Khare; Anjan Dutta; Umapada Pal; Tong Lu | ||||
Title | Rough-Fuzzy based Scene Categorization for Text Detection and Recognition in Video | Type | Journal Article | ||
Year | 2018 | Publication | Pattern Recognition | Abbreviated Journal | PR |
Volume | 80 | Issue | Pages | 64-82 | |
Keywords | Rough set; Fuzzy set; Video categorization; Scene image classification; Video text detection; Video text recognition | ||||
Abstract | Scene image or video understanding is a challenging task, especially when the number of video types increases drastically with high variations in background and foreground. This paper proposes a new method for categorizing scene videos into different classes, namely, Animation, Outlet, Sports, e-Learning, Medical, Weather, Defense, Economics, Animal Planet and Technology, for the performance improvement of text detection and recognition, which is an effective approach for scene image or video understanding. For this purpose, at first, we present a new combination of rough and fuzzy concepts to study irregular shapes of edge components in input scene videos, which helps to classify edge components into several groups. Next, the proposed method explores gradient direction information of each pixel in each edge component group to extract stroke-based features by dividing each group into several intra and inter planes. We further extract correlation and covariance features to encode semantic features located inside planes or between planes. Features of intra and inter planes of groups are then concatenated to get a feature matrix. Finally, the feature matrix is verified with temporal frames and fed to a neural network for categorization. Experimental results show that the proposed method outperforms the existing state-of-the-art methods; at the same time, the performance of text detection and recognition methods also improves significantly due to categorization. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | DAG; 600.097; 600.121 | Approved | no | ||
Call Number | Admin @ si @ RSJ2018 | Serial | 3096 | ||
Permanent link to this record | |||||
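As a rough illustration of the grouping step described in the RSJ2018 abstract above (rough and fuzzy concepts applied to the irregular shapes of edge components), the sketch below scores each edge component by a compactness-based irregularity measure, assigns triangular fuzzy memberships to low/medium/high-irregularity groups, and splits members into rough-set style lower (certain) and upper (possible) approximations. The irregularity measure, fuzzy sets and thresholds are all assumptions made for illustration, not the paper's actual formulation.

```python
import numpy as np

def irregularity(perimeter, area):
    """Compactness-based irregularity of an edge component (assumed measure).
    A circle gives ~1; more irregular shapes give larger values."""
    return perimeter ** 2 / (4.0 * np.pi * area)

def triangular_membership(x, a, b, c):
    """Standard triangular fuzzy membership with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical fuzzy sets over the irregularity score.
FUZZY_SETS = {"low": (0.0, 1.0, 2.0), "medium": (1.5, 3.0, 4.5), "high": (4.0, 7.0, 10.0)}

def rough_fuzzy_group(components, lower_tau=0.8, upper_tau=0.2):
    """Assign each (perimeter, area) component to lower/upper approximations of each group."""
    groups = {name: {"lower": [], "upper": []} for name in FUZZY_SETS}
    for idx, (perimeter, area) in enumerate(components):
        score = irregularity(perimeter, area)
        for name, (a, b, c) in FUZZY_SETS.items():
            mu = triangular_membership(score, a, b, c)
            if mu >= lower_tau:          # certainly in the group (lower approximation)
                groups[name]["lower"].append(idx)
            elif mu >= upper_tau:        # possibly in the group (upper approximation only)
                groups[name]["upper"].append(idx)
    return groups

print(rough_fuzzy_group([(40.0, 120.0), (90.0, 100.0), (200.0, 150.0)]))
```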
Author | Debora Gil; Rosa Maria Ortiz; Carles Sanchez; Antoni Rosell | ||||
Title | Objective endoscopic measurements of central airway stenosis. A pilot study | Type | Journal Article | ||
Year | 2018 | Publication | Respiration | Abbreviated Journal | RES |
Volume | 95 | Issue | Pages | 63–69 | |
Keywords | Bronchoscopy; Tracheal stenosis; Airway stenosis; Computer-assisted analysis | ||||
Abstract | Endoscopic estimation of the degree of stenosis in central airway obstruction is subjective and highly variable. Objective: To determine the benefits of using SENSA (System for Endoscopic Stenosis Assessment), an image-based computational software, for obtaining objective stenosis index (SI) measurements among a group of expert bronchoscopists and general pulmonologists. Methods: A total of 7 expert bronchoscopists and 7 general pulmonologists were enrolled to validate SENSA usage. The SI obtained by the physicians and by SENSA were compared with a reference SI to set their precision in SI computation. We used SENSA to efficiently obtain this reference SI in 11 selected cases of benign stenosis. A Web platform with three user-friendly microtasks was designed to gather the data. The users had to visually estimate the SI from videos with and without contours of the normal and the obstructed area provided by SENSA. The users were able to modify the SENSA contours to define the reference SI using morphometric bronchoscopy. Results: Visual SI estimation accuracy was associated with neither bronchoscopic experience (p = 0.71) nor the contours of the normal and the obstructed area provided by the system (p = 0.13). The precision of the SI by SENSA was 97.7% (95% CI: 92.4-103.7), which is significantly better than the precision of the SI by visual estimation (p < 0.001), with an improvement by at least 15%. Conclusion: SENSA provides objective SI measurements with a precision of up to 99.5%, which can be calculated from any bronchoscope using an affordable scalable interface. Providing normal and obstructed contours on bronchoscopic videos does not improve physicians' visual estimation of the SI. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | IAM; 600.075; 600.096; 600.145 | Approved | no | ||
Call Number | Admin @ si @ GOS2018 | Serial | 3043 | ||
Permanent link to this record | |||||
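For context on the stenosis index (SI) used throughout the GOS2018 record above: an SI is conventionally the fractional reduction of the lumen, i.e. one minus the ratio of the obstructed cross-sectional area to the normal one. The sketch below computes such an index from two delineated contours via the shoelace formula. It is an illustrative reconstruction under that assumed definition, not SENSA's actual implementation, and the toy contours are made up.

```python
import numpy as np

def polygon_area(points):
    """Area of a closed 2D polygon given as an (N, 2) array, via the shoelace formula."""
    x, y = np.asarray(points, dtype=float).T
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def stenosis_index(obstructed_contour, normal_contour):
    """Stenosis index as the fractional area reduction of the obstructed lumen
    relative to the normal lumen (conventional definition, assumed here)."""
    return 1.0 - polygon_area(obstructed_contour) / polygon_area(normal_contour)

# Toy contours: a normal lumen approximated by a circle of radius 10 and an
# obstructed lumen approximated by a circle of radius 6 (both in pixels).
theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
normal = np.stack([10.0 * np.cos(theta), 10.0 * np.sin(theta)], axis=1)
obstructed = np.stack([6.0 * np.cos(theta), 6.0 * np.sin(theta)], axis=1)

print(f"SI = {stenosis_index(obstructed, normal):.1%}")  # ~64% area reduction
```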
Author | Marta Diez-Ferrer; Debora Gil; Cristian Tebe; Carles Sanchez | ||||
Title | Positive Airway Pressure to Enhance Computed Tomography Imaging for Airway Segmentation for Virtual Bronchoscopic Navigation | Type | Journal Article | ||
Year | 2018 | Publication | Respiration | Abbreviated Journal | RES |
Volume | 96 | Issue | 6 | Pages | 525-534 |
Keywords | Multidetector computed tomography; Bronchoscopy; Continuous positive airway pressure; Image enhancement; Virtual bronchoscopic navigation | ||||
Abstract | RATIONALE: Virtual bronchoscopic navigation (VBN) guidance to peripheral pulmonary lesions is often limited by insufficient segmentation of the peripheral airways. OBJECTIVES: To test the effect of applying positive airway pressure (PAP) during CT acquisition to improve segmentation, particularly at end-expiration. METHODS: CT acquisitions in inspiration and expiration with 4 PAP protocols were recorded prospectively and compared to baseline inspiratory acquisitions in 20 patients. The 4 protocols explored differences between devices (flow vs. turbine), exposures (within seconds vs. 15-min) and pressure levels (10 vs. 14 cmH2O). Segmentation quality was evaluated with the number of airways and number of endpoints reached. A generalized mixed-effects model explored the estimated effect of each protocol. MEASUREMENTS AND MAIN RESULTS: Patient characteristics and lung function did not significantly differ between protocols. Compared to baseline inspiratory acquisitions, expiratory acquisitions after 15 min of 14 cmH2O PAP segmented 1.63-fold more airways (95% CI 1.07-2.48; p = 0.018) and reached 1.34-fold more endpoints (95% CI 1.08-1.66; p = 0.004). Inspiratory acquisitions performed immediately under 10 cmH2O PAP reached 1.20-fold (95% CI 1.09-1.33; p < 0.001) more endpoints; after 15 min the increase was 1.14-fold (95% CI 1.05-1.24; p < 0.001). CONCLUSIONS: CT acquisitions with PAP segment more airways and reach more endpoints than baseline inspiratory acquisitions. The improvement is particularly evident at end-expiration after 15 min of 14 cmH2O PAP. Further studies must confirm that the improvement increases diagnostic yield when using VBN to evaluate peripheral pulmonary lesions. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | IAM; 600.145 | Approved | no | ||
Call Number | Admin @ si @ DGT2018 | Serial | 3135 | ||
Permanent link to this record | |||||
Author | Sumit K. Banchhor; Narendra D. Londhe; Tadashi Araki; Luca Saba; Petia Radeva; Narendra N. Khanna; Jasjit S. Suri | ||||
Title | Calcium detection, its quantification, and grayscale morphology-based risk stratification using machine learning in multimodality big data coronary and carotid scans: A review. | Type | Journal Article | ||
Year | 2018 | Publication | Computers in Biology and Medicine | Abbreviated Journal | CBM |
Volume | 101 | Issue | Pages | 184-198 | |
Keywords | Heart disease; Stroke; Atherosclerosis; Intravascular; Coronary; Carotid; Calcium; Morphology; Risk stratification | ||||
Abstract | Purpose of review: Atherosclerosis is the leading cause of cardiovascular disease (CVD) and stroke. Typically, atherosclerotic calcium is found during the mature stage of the atherosclerosis disease. It is therefore often a challenge to identify and quantify the calcium. This is due to the presence of multiple components of plaque buildup in the arterial walls. The American College of Cardiology/American Heart Association guidelines point to the importance of calcium in the coronary and carotid arteries and further recommend its quantification for the prevention of heart disease. It is therefore essential to stratify the CVD risk of the patient into low- and high-risk bins. Recent findings: Calcium formation in the artery walls is multifocal in nature with sizes at the micrometer level. Thus, its detection requires high-resolution imaging. Clinical experience has shown that even though optical coherence tomography offers better resolution, intravascular ultrasound still remains an important imaging modality for coronary wall imaging. For a computer-based analysis system to be complete, it must be scientifically and clinically validated. This study presents a state-of-the-art review (condensation of 152 publications after examining 200 articles) covering the methods for calcium detection and its quantification for coronary and carotid arteries, the pros and cons of these methods, and the risk stratification strategies. The review also presents different kinds of statistical models and gold standard solutions for the evaluation of software systems useful for calcium detection and quantification. Finally, the review concludes with a possible vision for designing the next-generation system for better clinical outcomes. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB; no proj | Approved | no | ||
Call Number | Admin @ si @ BLA2018 | Serial | 3188 | ||
Permanent link to this record | |||||
Author | F.Negin; Pau Rodriguez; M.Koperski; A.Kerboua; Jordi Gonzalez; J.Bourgeois; E.Chapoulie; P.Robert; F.Bremond | ||||
Title | PRAXIS: Towards automatic cognitive assessment using gesture recognition | Type | Journal Article | ||
Year | 2018 | Publication | Expert Systems with Applications | Abbreviated Journal | ESWA |
Volume | 106 | Issue | Pages | 21-35 | |
Keywords | |||||
Abstract | The Praxis test is a gesture-based diagnostic test which has been accepted as diagnostically indicative of cortical pathologies such as Alzheimer’s disease. Despite being simple, this test is often skipped by clinicians. In this paper, we propose a novel framework to investigate the potential of static and dynamic upper-body gestures based on the Praxis test, and their potential in a medical framework to automate the test procedures for computer-assisted cognitive assessment of older adults. In order to carry out gesture recognition as well as correctness assessment of the performances, we have collected a novel, challenging RGB-D gesture video dataset recorded by Kinect v2, which contains 29 specific gestures suggested by clinicians and recorded from both experts and patients performing the gesture set. Moreover, we propose a framework to learn the dynamics of upper-body gestures, considering the videos as sequences of short-term clips of gestures. Our approach first uses body part detection to extract image patches surrounding the hands and then, by means of a fine-tuned convolutional neural network (CNN) model, it learns deep hand features which are then linked to a long short-term memory to capture the temporal dependencies between video frames. We report the results of four developed methods using different modalities. The experiments show the effectiveness of our deep learning based approach in gesture recognition and performance assessment tasks. Clinicians' satisfaction with the assessment reports indicates the potential impact of the framework on diagnosis. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ISE | Approved | no | ||
Call Number | Admin @ si @ NRK2018 | Serial | 3669 | ||
Permanent link to this record | |||||
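The NRK2018 pipeline described above (per-frame CNN features extracted from hand patches, followed by an LSTM over the clip) maps naturally onto a small PyTorch module. The sketch below is a minimal, assumption-laden reconstruction: the resnet18 backbone is only a stand-in for the fine-tuned CNN, and the feature size, hidden size and number of gesture classes are placeholders; the real system additionally performs body-part detection and correctness assessment, which are not shown.

```python
import torch
import torch.nn as nn
from torchvision import models

class GestureCNNLSTM(nn.Module):
    """Per-frame CNN features pooled by an LSTM, then classified (illustrative sketch)."""

    def __init__(self, num_classes=29, hidden_size=256):
        super().__init__()
        backbone = models.resnet18(weights=None)       # stand-in for the fine-tuned CNN
        backbone.fc = nn.Identity()                    # keep the 512-d pooled features
        self.backbone = backbone
        self.lstm = nn.LSTM(512, hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, clips):
        # clips: (batch, time, 3, H, W) hand patches cropped around the detected hands
        b, t = clips.shape[:2]
        feats = self.backbone(clips.flatten(0, 1))     # (b*t, 512)
        feats = feats.view(b, t, -1)                   # (b, t, 512)
        _, (h_n, _) = self.lstm(feats)                 # last hidden state summarises the clip
        return self.classifier(h_n[-1])                # (b, num_classes)

model = GestureCNNLSTM()
dummy = torch.randn(2, 8, 3, 112, 112)                 # 2 clips of 8 frames each
print(model(dummy).shape)                              # torch.Size([2, 29])
```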
Author | Thanh Nam Le; Muhammad Muzzamil Luqman; Anjan Dutta; Pierre Heroux; Christophe Rigaud; Clement Guerin; Pasquale Foggia; Jean Christophe Burie; Jean Marc Ogier; Josep Llados; Sebastien Adam | ||||
Title | Subgraph spotting in graph representations of comic book images | Type | Journal Article | ||
Year | 2018 | Publication | Pattern Recognition Letters | Abbreviated Journal | PRL |
Volume | 112 | Issue | Pages | 118-124 | |
Keywords | Attributed graph; Region adjacency graph; Graph matching; Graph isomorphism; Subgraph isomorphism; Subgraph spotting; Graph indexing; Graph retrieval; Query by example; Dataset and comic book images | ||||
Abstract | Graph-based representations are the most powerful data structures for extracting, representing and preserving the structural information of underlying data. Subgraph spotting is an interesting research problem, especially for structural-information-based content-based image retrieval (CBIR) and query by example (QBE) in image databases. In this paper we address the lack of freely available ground-truthed datasets for subgraph spotting and present a new dataset for subgraph spotting in graph representations of comic book images (SSGCI) with its ground-truth and evaluation protocol. Experimental results of two state-of-the-art methods of subgraph spotting are presented on the new SSGCI dataset. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | DAG; 600.097; 600.121 | Approved | no | ||
Call Number | Admin @ si @ LLD2018 | Serial | 3150 | ||
Permanent link to this record | |||||
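Attributed region adjacency graphs and subgraph matching, as in the LLD2018 record above, can be prototyped directly with networkx. The sketch below builds a tiny attributed graph for a hypothetical comic page and spots occurrences of a query subgraph via exact subgraph isomorphism with label-aware node matching; exact matching is only a crude stand-in for the error-tolerant spotting methods evaluated on SSGCI, and the labels and topology are invented.

```python
import networkx as nx
from networkx.algorithms import isomorphism

# Attributed region adjacency graph of a comic page: nodes are regions with a
# coarse label attribute, edges connect spatially adjacent regions.
page = nx.Graph()
page.add_nodes_from([
    (0, {"label": "balloon"}), (1, {"label": "text"}),
    (2, {"label": "face"}),    (3, {"label": "panel"}),
    (4, {"label": "balloon"}), (5, {"label": "text"}),
])
page.add_edges_from([(0, 1), (0, 3), (2, 3), (3, 4), (4, 5)])

# Query by example: a speech balloon adjacent to a text region.
query = nx.Graph()
query.add_nodes_from([("b", {"label": "balloon"}), ("t", {"label": "text"})])
query.add_edge("b", "t")

matcher = isomorphism.GraphMatcher(
    page, query, node_match=isomorphism.categorical_node_match("label", None)
)
# Each mapping sends page nodes to query nodes; collect the spotted region sets.
hits = [set(mapping) for mapping in matcher.subgraph_isomorphisms_iter()]
print(hits)  # e.g. [{0, 1}, {4, 5}]
```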
Author | Sergio Escalera; Jordi Gonzalez; Hugo Jair Escalante; Xavier Baro; Isabelle Guyon | ||||
Title | Looking at People Special Issue | Type | Journal Article | ||
Year | 2018 | Publication | International Journal of Computer Vision | Abbreviated Journal | IJCV |
Volume | 126 | Issue | 2-4 | Pages | 141-143 |
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | HUPBA; ISE; 600.119 | Approved | no | ||
Call Number | Admin @ si @ EGJ2018 | Serial | 3093 | ||
Permanent link to this record | |||||
Author | Arash Akbarinia; C. Alejandro Parraga | ||||
Title | Feedback and Surround Modulated Boundary Detection | Type | Journal Article | ||
Year | 2018 | Publication | International Journal of Computer Vision | Abbreviated Journal | IJCV |
Volume | 126 | Issue | 12 | Pages | 1367–1380 |
Keywords | Boundary detection; Surround modulation; Biologically-inspired vision | ||||
Abstract | Edges are key components of any visual scene to the extent that we can recognise objects merely by their silhouettes. The human visual system captures edge information through neurons in the visual cortex that are sensitive to both intensity discontinuities and particular orientations. The “classical approach” assumes that these cells are only responsive to the stimulus present within their receptive fields; however, recent studies demonstrate that surrounding regions and inter-areal feedback connections influence their responses significantly. In this work we propose a biologically-inspired edge detection model in which orientation selective neurons are represented through the first derivative of a Gaussian function resembling double-opponent cells in the primary visual cortex (V1). In our model we account for four kinds of receptive field surround, i.e. full, far, iso- and orthogonal-orientation, whose contributions are contrast-dependent. The output signal from V1 is pooled in its perpendicular direction by larger V2 neurons employing a contrast-variant centre-surround kernel. We further introduce a feedback connection from higher-level visual areas to the lower ones. The results of our model on three benchmark datasets show a large improvement over the current non-learning and biologically-inspired state-of-the-art algorithms, while remaining competitive with learning-based methods. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | NEUROBIT; 600.068; 600.072 | Approved | no | ||
Call Number | Admin @ si @ AkP2018b | Serial | 2991 | ||
Permanent link to this record | |||||
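The V1 stage described in the AkP2018b abstract above, orientation-selective responses modelled by the first derivative of a Gaussian, can be approximated in a few lines of NumPy/SciPy. The sketch below filters an image with derivative-of-Gaussian kernels at several orientations (implemented by rotating the image), takes the maximum response per pixel, and applies a crude divisive surround suppression. The specific surround kernels, contrast dependence, V2 pooling and feedback stages of the paper are not reproduced; the sigma values and suppression constant are assumptions.

```python
import numpy as np
from scipy import ndimage

def dog_edge_map(image, sigma=2.0, n_orientations=8, surround_sigma=8.0, k=0.5):
    """Max over oriented first-derivative-of-Gaussian responses, with a simple
    divisive surround suppression (illustrative, not the paper's exact model)."""
    responses = []
    for angle in np.linspace(0.0, 180.0, n_orientations, endpoint=False):
        rotated = ndimage.rotate(image, angle, reshape=False, mode="nearest")
        # First derivative of a Gaussian along the row axis = oriented edge detector.
        resp = ndimage.gaussian_filter(rotated, sigma=sigma, order=(1, 0))
        responses.append(np.abs(ndimage.rotate(resp, -angle, reshape=False, mode="nearest")))
    energy = np.max(responses, axis=0)
    # Surround modulation: divide by a large-scale local average of the edge energy.
    surround = ndimage.gaussian_filter(energy, sigma=surround_sigma)
    return energy / (1.0 + k * surround)

rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[:, 32:] = 1.0                      # a vertical step edge
img += 0.05 * rng.standard_normal(img.shape)
edges = dog_edge_map(img)
print(edges.shape, float(edges.max()))
```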
Author | Adrien Gaidon; Antonio Lopez; Florent Perronnin | ||||
Title | The Reasonable Effectiveness of Synthetic Visual Data | Type | Journal Article | ||
Year | 2018 | Publication | International Journal of Computer Vision | Abbreviated Journal | IJCV |
Volume | 126 | Issue | 9 | Pages | 899–901 |
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ADAS; 600.118 | Approved | no | ||
Call Number | Admin @ si @ GLP2018 | Serial | 3180 | ||
Permanent link to this record | |||||
Author | Joan Serrat; Felipe Lumbreras; Idoia Ruiz | ||||
Title | Learning to measure for preshipment garment sizing | Type | Journal Article | ||
Year | 2018 | Publication | Measurement | Abbreviated Journal | MEASURE |
Volume | 130 | Issue | Pages | 327-339 | |
Keywords | Apparel; Computer vision; Structured prediction; Regression | ||||
Abstract | Clothing is still manually manufactured for the most part nowadays, resulting in discrepancies between nominal and real dimensions, and potentially ill-fitting garments. Hence, it is common in the apparel industry to manually perform measures at preshipment time. We present an automatic method to obtain such measures from a single image of a garment that speeds up this task. It is generic and extensible in the sense that it does not depend explicitly on the garment shape or type. Instead, it learns through a probabilistic graphical model to identify the different contour parts. Subsequently, a set of Lasso regressors, one per desired measure, can predict the actual values of the measures. We present results on a dataset of 130 images of jackets and 98 of pants, of varying sizes and styles, obtaining 1.17 and 1.22 cm of mean absolute error, respectively. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ADAS; MSIAU; 600.122; 600.118 | Approved | no | ||
Call Number | Admin @ si @ SLR2018 | Serial | 3128 | ||
Permanent link to this record | |||||
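The last stage of the SLR2018 method above, one Lasso regressor per garment measure on top of descriptors derived from the identified contour parts, is straightforward to sketch with scikit-learn. Everything below (feature dimensionality, number of measures, the synthetic data and the alpha value) is made up for illustration; the paper's contour-part identification with a probabilistic graphical model is not shown.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical setup: 200 garments, a 40-d descriptor derived from the labelled
# contour parts, and 3 target measures in cm (e.g. chest width, length, sleeve).
X = rng.normal(size=(200, 40))
true_w = rng.normal(size=(40, 3)) * (rng.random((40, 3)) < 0.2)   # sparse ground truth
Y = X @ true_w + 0.3 * rng.normal(size=(200, 3))

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)

# One Lasso regressor per desired measure, as in the abstract above.
models = [Lasso(alpha=0.05).fit(X_tr, Y_tr[:, j]) for j in range(Y.shape[1])]
preds = np.column_stack([m.predict(X_te) for m in models])

mae_per_measure = np.mean(np.abs(preds - Y_te), axis=0)
print("MAE per measure (cm):", np.round(mae_per_measure, 2))
```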
Author | Muhammad Anwer Rao; Fahad Shahbaz Khan; Joost Van de Weijer; Matthieu Molinier; Jorma Laaksonen | ||||
Title | Binary patterns encoded convolutional neural networks for texture recognition and remote sensing scene classification | Type | Journal Article | ||
Year | 2018 | Publication | ISPRS Journal of Photogrammetry and Remote Sensing | Abbreviated Journal | ISPRS J |
Volume | 138 | Issue | Pages | 74-85 | |
Keywords | Remote sensing; Deep learning; Scene classification; Local Binary Patterns; Texture analysis | ||||
Abstract | Designing discriminative powerful texture features robust to realistic imaging conditions is a challenging computer vision problem with many applications, including material recognition and analysis of satellite or aerial imagery. In the past, most texture description approaches were based on dense orderless statistical distribution of local features. However, most recent approaches to texture recognition and remote sensing scene classification are based on Convolutional Neural Networks (CNNs). The de facto practice when learning these CNN models is to use RGB patches as input with training performed on large amounts of labeled data (ImageNet). In this paper, we show that Local Binary Patterns (LBP) encoded CNN models, codenamed TEX-Nets, trained using mapped coded images with explicit LBP based texture information provide complementary information to the standard RGB deep models. Additionally, two deep architectures, namely early and late fusion, are investigated to combine the texture and color information. To the best of our knowledge, we are the first to investigate Binary Patterns encoded CNNs and different deep network fusion architectures for texture recognition and remote sensing scene classification. We perform comprehensive experiments on four texture recognition datasets and four remote sensing scene classification benchmarks: UC-Merced with 21 scene categories, WHU-RS19 with 19 scene classes, RSSCN7 with 7 categories and the recently introduced large scale aerial image dataset (AID) with 30 aerial scene types. We demonstrate that TEX-Nets provide complementary information to the standard RGB deep model of the same network architecture. Our late fusion TEX-Net architecture always improves the overall performance compared to the standard RGB network on both recognition problems. Furthermore, our final combination leads to consistent improvement over the state-of-the-art for remote sensing scene classification. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | LAMP; 600.109; 600.106; 600.120 | Approved | no | ||
Call Number | Admin @ si @ RKW2018 | Serial | 3158 | ||
Permanent link to this record | |||||
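The core idea of TEX-Nets in the RKW2018 record above is to feed the CNN a texture-coded image rather than (or in addition to) raw RGB, and then fuse the two streams. A minimal sketch of that coding step and of late fusion of per-stream class posteriors is shown below; the LBP parameters, the stacking of three LBP radii into a 3-channel "coded" image and the averaging-based fusion are assumptions for illustration, not the paper's exact design.

```python
import numpy as np
from skimage import data, color
from skimage.feature import local_binary_pattern

def lbp_coded_image(rgb, P=8, R=1.0):
    """Map an RGB image to a 3-channel LBP-coded image (one radius per channel).
    The per-channel radius stacking is an assumption made for illustration."""
    gray = (color.rgb2gray(rgb) * 255).astype(np.uint8)
    channels = [local_binary_pattern(gray, P, R * s, method="uniform") for s in (1, 2, 3)]
    coded = np.stack(channels, axis=-1)
    return coded / coded.max()            # normalise to [0, 1] before feeding a CNN

def late_fusion(prob_rgb, prob_tex, w=0.5):
    """Late fusion: weighted average of the class posteriors of the two streams."""
    return w * prob_rgb + (1.0 - w) * prob_tex

rgb = data.astronaut()
tex_input = lbp_coded_image(rgb)
print(tex_input.shape)                     # (512, 512, 3), ready for a texture stream

# Toy posteriors over 4 scene classes from the RGB and texture streams.
print(late_fusion(np.array([0.1, 0.6, 0.2, 0.1]), np.array([0.2, 0.3, 0.4, 0.1])))
```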
Author | Katerine Diaz; Francesc J. Ferri; Aura Hernandez-Sabate | ||||
Title | An overview of incremental feature extraction methods based on linear subspaces | Type | Journal Article | ||
Year | 2018 | Publication | Knowledge-Based Systems | Abbreviated Journal | KBS |
Volume | 145 | Issue | Pages | 219-235 | |
Keywords | |||||
Abstract | With the massive explosion of machine learning in our day-to-day life, incremental and adaptive learning has become a major topic, crucial to keep up-to-date and improve classification models and their corresponding feature extraction processes. This paper presents a categorized overview of incremental feature extraction based on linear subspace methods which aim at incorporating new information to the already acquired knowledge without accessing previous data. Specifically, this paper focuses on those linear dimensionality reduction methods with orthogonal matrix constraints based on global loss function, due to the extensive use of their batch approaches versus other linear alternatives. Thus, we cover the approaches derived from Principal Component Analysis, Linear Discriminant Analysis and Discriminative Common Vector methods. For each basic method, its incremental approaches are differentiated according to the subspace model and matrix decomposition involved in the updating process. Besides this categorization, several updating strategies are distinguished according to the amount of data used in each update and to whether a static or dynamic number of classes is considered. Moreover, the specific role of the size/dimension ratio in each method is considered. Finally, computational complexity, experimental setup and the accuracy rates according to published results are compiled and analyzed, and an empirical evaluation is done to compare the best approach of each kind. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0950-7051 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ADAS; 600.118 | Approved | no | ||
Call Number | Admin @ si @ DFH2018 | Serial | 3090 | ||
Permanent link to this record | |||||
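As a concrete counterpart to the overview in the DFH2018 record above, the sketch below updates a linear subspace incrementally with scikit-learn's IncrementalPCA: new chunks of data refine the projection without revisiting previously seen samples. This illustrates only the simplest PCA-based case among the families the paper categorizes (PCA, LDA, DCV and their variants), and the synthetic low-rank data are made up.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.default_rng(0)

# Stream of data arriving in chunks: 5 chunks of 200 samples in 50 dimensions,
# drawn from a low-rank model so a 10-d subspace captures most of the variance.
basis = rng.normal(size=(10, 50))
chunks = [rng.normal(size=(200, 10)) @ basis + 0.1 * rng.normal(size=(200, 50))
          for _ in range(5)]

ipca = IncrementalPCA(n_components=10)
for chunk in chunks:
    ipca.partial_fit(chunk)              # update the subspace without storing old data

print("explained variance ratio (sum):", round(ipca.explained_variance_ratio_.sum(), 3))
projected = ipca.transform(chunks[-1])   # project new samples onto the learned subspace
print("projected shape:", projected.shape)
```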
Author | Ivet Rafegas; Maria Vanrell | ||||
Title | Color encoding in biologically-inspired convolutional neural networks | Type | Journal Article | ||
Year | 2018 | Publication | Vision Research | Abbreviated Journal | VR |
Volume | 151 | Issue | Pages | 7-17 | |
Keywords | Color coding; Computer vision; Deep learning; Convolutional neural networks | ||||
Abstract | Convolutional Neural Networks have been proposed as suitable frameworks to model biological vision. Some of these artificial networks showed representational properties that rival primate performances in object recognition. In this paper we explore how color is encoded in a trained artificial network. We do this by estimating a color selectivity index for each neuron, which describes the neuron's activity in response to color input stimuli. The index allows us to classify whether neurons are color selective or not and whether they are selective to a single color or to double colors. We have determined that all five convolutional layers of the network have a large number of color selective neurons. Color opponency clearly emerges in the first layer, presenting 4 main axes (Black-White, Red-Cyan, Blue-Yellow and Magenta-Green), but this is reduced and rotated as we go deeper into the network. In layer 2 we find a denser hue sampling of color neurons and opponency is reduced almost to one new main axis, the Bluish-Orangish axis, coinciding with the dataset bias. In layers 3, 4 and 5 color neurons are similar amongst themselves, presenting different types of neurons that detect specific colored objects (e.g., orangish faces), specific surrounds (e.g., blue sky) or specific colored or contrasted object-surround configurations (e.g. blue blob in a green surround). Overall, our work concludes that color and shape representations are successively entangled through all the layers of the studied network, revealing certain parallelisms with the reported evidence in primate brains that can provide useful insight into intermediate hierarchical spatio-chromatic representations. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | CIC; 600.051; 600.087 | Approved | no | ||
Call Number | Admin @ si @RaV2018 | Serial | 3114 | ||
Permanent link to this record | |||||
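The color selectivity index at the heart of the RaV2018 study above can be illustrated with a toy computation: given a neuron's activations to a bank of stimuli, one simple index contrasts the activation obtained with the original colors against the activation obtained when the neuron's preferred stimuli are converted to grayscale. The exact index definition below is an assumption made for illustration, not the paper's formula, and the activation values are synthetic.

```python
import numpy as np

def color_selectivity_index(act_color, act_gray, eps=1e-8):
    """Toy index in [0, 1]: 0 if color adds nothing over grayscale, close to 1 if the
    neuron responds almost exclusively to the colored version of its top stimuli."""
    act_color = np.maximum(np.asarray(act_color, dtype=float), 0.0)
    act_gray = np.maximum(np.asarray(act_gray, dtype=float), 0.0)
    top = np.argsort(act_color)[-10:]              # the neuron's 10 strongest stimuli
    c, g = act_color[top].sum(), act_gray[top].sum()
    return (c - g) / (c + g + eps) if c > g else 0.0

rng = np.random.default_rng(0)
n_stimuli = 200
act_gray = rng.random(n_stimuli)
# A strongly color-selective neuron: colored versions of its preferred stimuli
# drive it roughly three times harder than the grayscale versions.
act_color = act_gray * rng.uniform(2.5, 3.5, n_stimuli)

print(round(color_selectivity_index(act_color, act_gray), 3))
```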
Author | Maedeh Aghaei; Mariella Dimiccoli; C. Canton-Ferrer; Petia Radeva | ||||
Title | Towards social pattern characterization from egocentric photo-streams | Type | Journal Article | ||
Year | 2018 | Publication | Computer Vision and Image Understanding | Abbreviated Journal | CVIU |
Volume | 171 | Issue | Pages | 104-117 | |
Keywords | Social pattern characterization; Social signal extraction; Lifelogging; Convolutional and recurrent neural networks | ||||
Abstract | Following the increasingly popular trend of social interaction analysis in egocentric vision, this article presents a comprehensive pipeline for automatic social pattern characterization of a wearable photo-camera user. The proposed framework relies merely on the visual analysis of egocentric photo-streams and consists of three major steps. The first step is to detect social interactions of the user where the impact of several social signals on the task is explored. The detected social events are inspected in the second step for categorization into different social meetings. These two steps act at event-level where each potential social event is modeled as a multi-dimensional time-series, whose dimensions correspond to a set of relevant features for each task; finally, LSTM is employed to classify the time-series. The last step of the framework is to characterize social patterns of the user. Our goal is to quantify the duration, the diversity and the frequency of the user social relations in various social situations. This goal is achieved by the discovery of recurrences of the same people across the whole set of social events related to the user. Experimental evaluation over EgoSocialStyle – the proposed dataset in this work, and EGO-GROUP demonstrates promising results on the task of social pattern characterization from egocentric photo-streams. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB; no proj | Approved | no | ||
Call Number | Admin @ si @ ADC2018 | Serial | 3022 | ||
Permanent link to this record | |||||
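The final step of the ADC2018 pipeline above, quantifying the frequency, diversity and duration of the user's social relations by finding recurrences of the same people across detected social events, reduces to simple aggregation once each event is annotated with the people present and its duration. The sketch below uses made-up event records and elementary statistics (appearance counts, Shannon diversity, total minutes per person); it is an illustration of the idea, not the paper's implementation.

```python
from collections import Counter
import math

# Hypothetical detected social events: people recognised in the event and its duration.
events = [
    {"people": {"Alice", "Bob"}, "minutes": 30},
    {"people": {"Alice"}, "minutes": 10},
    {"people": {"Carol", "Bob"}, "minutes": 45},
    {"people": {"Alice", "Carol"}, "minutes": 20},
]

frequency = Counter()           # in how many events each person appears
duration = Counter()            # total minutes spent with each person
for event in events:
    for person in event["people"]:
        frequency[person] += 1
        duration[person] += event["minutes"]

total = sum(frequency.values())
# Shannon entropy of the interaction distribution as a simple diversity measure.
diversity = -sum((n / total) * math.log2(n / total) for n in frequency.values())

print("frequency:", dict(frequency))
print("duration (min):", dict(duration))
print("diversity (bits):", round(diversity, 3))
```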
Author | Pichao Wang; Wanqing Li; Philip Ogunbona; Jun Wan; Sergio Escalera | ||||
Title | RGB-D-based Human Motion Recognition with Deep Learning: A Survey | Type | Journal Article | ||
Year | 2018 | Publication | Computer Vision and Image Understanding | Abbreviated Journal | CVIU |
Volume | 171 | Issue | Pages | 118-139 | |
Keywords | Human motion recognition; RGB-D data; Deep learning; Survey | ||||
Abstract | Human motion recognition is one of the most important branches of human-centered research activities. In recent years, motion recognition based on RGB-D data has attracted much attention. Along with the development in artificial intelligence, deep learning techniques have gained remarkable success in computer vision. In particular, convolutional neural networks (CNN) have achieved great success for image-based tasks, and recurrent neural networks (RNN) are renowned for sequence-based problems. Specifically, deep learning methods based on the CNN and RNN architectures have been adopted for motion recognition using RGB-D data. In this paper, a detailed overview of recent advances in RGB-D-based motion recognition is presented. The reviewed methods are broadly categorized into four groups, depending on the modality adopted for recognition: RGB-based, depth-based, skeleton-based and RGB+D-based. As a survey focused on the application of deep learning to RGB-D-based motion recognition, we explicitly discuss the advantages and limitations of existing techniques. In particular, we highlight the methods for encoding the spatial-temporal-structural information inherent in video sequences, and discuss potential directions for future research. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | HUPBA; no proj | Approved | no | ||
Call Number | Admin @ si @ WLO2018 | Serial | 3123 | ||
Permanent link to this record |