Records | |||||
---|---|---|---|---|---|
Author | Albert Clapes | ||||
Title | Learning to recognize human actions: from hand-crafted to deep-learning based visual representations | Type | Book Whole | ||
Year | 2019 | Publication | PhD Thesis, Universitat de Barcelona-CVC | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Action recognition is a very challenging and important problem in computer vision. Researchers working in this field aspire to provide computers with the ability to visually perceive human actions – that is, to observe, interpret, and understand human-related events that occur in the physical environment merely from visual data. The applications of this technology are numerous: human-machine interaction, e-health, monitoring/surveillance, and content-based video retrieval, among others. Hand-crafted methods dominated the field until the appearance of the first successful deep learning-based action recognition works. Although earlier deep-based methods underperformed with respect to hand-crafted approaches, they slowly but steadily improved to become state of the art, eventually achieving better results than hand-crafted ones. Still, hand-crafted approaches can be advantageous in certain scenarios, especially when not enough data is available to train very large deep models, or simply when combined with deep-based methods to further boost performance, showing how hand-crafted features can provide extra knowledge about human actions that deep networks are not able to easily learn.
This Thesis concurs in time with this change of paradigm and, hence, reflects it in two distinguished parts. In the first part, we focus on improving current successful hand-crafted approaches for action recognition, and we do so from three different perspectives. Using the dense trajectories framework as a backbone: first, we explore the use of multi-modal and multi-view input data to enrich the trajectory descriptors. Second, we focus on the classification part of action recognition pipelines and propose an ensemble learning approach, where each classifier learns from a different set of local spatiotemporal features and their outputs are then combined following a strategy based on Dempster-Shafer Theory. And third, we propose a novel hand-crafted feature extraction method that constructs a mid-level feature description to better model long-term spatiotemporal dynamics within action videos. Moving to the second part of the Thesis, we start with a comprehensive study of current deep-learning-based action recognition methods. We review both fundamental and cutting-edge methodologies reported during the last few years and introduce a taxonomy of deep-learning methods dedicated to action recognition. In particular, we analyze and discuss how these handle the temporal dimension of the data. Last but not least, we propose a residual recurrent network for action recognition that naturally integrates all our previous findings in a powerful and promising framework. |
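The Dempster-Shafer fusion of classifier outputs mentioned in this abstract can be illustrated with a minimal sketch of Dempster's rule of combination (the thesis' actual ensemble and belief assignments are not reproduced here; the mass functions below are invented for illustration):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass) with
    Dempster's rule, renormalizing by the conflicting mass."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # mass assigned to disjoint hypotheses
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two classifiers expressing belief over the actions {run, walk}:
run, walk = frozenset({"run"}), frozenset({"walk"})
m1 = {run: 0.6, run | walk: 0.4}                  # leans towards "run"
m2 = {run: 0.5, walk: 0.3, run | walk: 0.2}
fused = dempster_combine(m1, m2)                  # fused belief, sums to 1
```

After combination, the mass on "run" is reinforced because both sources lean that way, while conflicting evidence (one source saying "run", the other "walk") is discarded and renormalized away.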
||||
Address | January 2019 | ||||
Corporate Author | Thesis | Ph.D. thesis | |||
Publisher | Ediciones Graficas Rey | Place of Publication | Editor | Sergio Escalera | |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-84-948531-2-8 | Medium | ||
Area | Expedition | Conference | |||
Notes | HUPBA | Approved | no | ||
Call Number | Admin @ si @ Cla2019 | Serial | 3219 | ||
Permanent link to this record | |||||
Author | Santiago Segui | ||||
Title | Contributions to the Diagnosis of Intestinal Motility by Automatic Image Analysis | Type | Book Whole | ||
Year | 2011 | Publication | PhD Thesis, Universitat de Barcelona-CVC | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | In the early twenty-first century, Given Imaging Ltd. presented wireless capsule endoscopy (WCE) as a new technological breakthrough that allowed the visualization of the intestine by using a small, swallowed camera. This small-size device was received with high enthusiasm within the medical community, and it is still one of the medical devices with the highest growth rate in use. WCE can be used as a novel diagnostic tool that presents several clinical advantages: it is non-invasive and, at the same time, it provides, for the first time, a full picture of the small bowel morphology, contents and dynamics. Since its appearance, WCE has been used to detect several intestinal dysfunctions such as polyps, ulcers and bleeding. However, the visual analysis of WCE videos presents an important drawback: the long time required by physicians for proper video visualization. Given this limitation, the development of computer-aided systems is required for the extensive use of WCE in the medical community. The work presented in this thesis is a set of contributions to the automatic image analysis and computer-aided diagnosis of intestinal motility disorders using WCE. Until now, the diagnosis of small bowel motility dysfunctions was basically performed by invasive techniques such as the manometry test, which can only be conducted at some referral centers around the world owing to the complexity of the procedure and the medical expertise required in the interpretation of the results. Our contributions are divided into three main blocks: 1. Image analysis by computer vision techniques to detect events in the endoluminal WCE scene. Several methods have been proposed to detect visual events such as intestinal contractions, intestinal content, tunnel and wrinkles; 2. Machine learning techniques for the analysis and the manipulation of WCE data. These methods have been proposed in order to overcome the problems that the analysis of WCE presents, such as video acquisition cost, unlabeled data and large amounts of data; 3. Two different systems for the computer-aided diagnosis of intestinal motility disorders using WCE. The first system is a fully automatic method that aids in discriminating healthy subjects from patients with severe intestinal motor disorders like pseudo-obstruction or food intolerance. The second system is another automatic method that models healthy subjects and discriminates them from patients with mild intestinal motility disorders. |
||||
Address | |||||
Corporate Author | Thesis | Ph.D. thesis | |||
Publisher | Ediciones Graficas Rey | Place of Publication | Editor | Jordi Vitria | |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB | Approved | no | ||
Call Number | Admin @ si @ Seg2011 | Serial | 1836 | ||
Permanent link to this record | |||||
Author | Pierluigi Casale | ||||
Title | Approximate Ensemble Methods for Physical Activity Recognition Applications | Type | Book Whole | ||
Year | 2011 | Publication | PhD Thesis, Universitat de Barcelona-CVC | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | The main interest of this thesis is in computational methodologies able to reduce the degree of complexity of learning algorithms, and their application to physical activity recognition. Random Projections are used to reduce the computational complexity in Multiple Classifier Systems. A new boosting algorithm and a new one-class classification methodology have been developed. In both cases, random projections are used for reducing the dimensionality of the problem and for generating diversity, exploiting in this way the benefits that ensembles of classifiers provide in terms of performance and stability. Moreover, the new one-class classification methodology, based on an ensemble strategy able to approximate a multidimensional convex hull, has been shown to outperform state-of-the-art one-class classification methodologies. The practical focus of the thesis is on Physical Activity Recognition. A new hardware platform for wearable computing applications has been developed and used for collecting data on activities of daily living, allowing the study of the optimal feature set able to successfully classify activities. Based on the classification methodologies developed and the study conducted on physical activity classification, a machine learning architecture capable of providing a continuous authentication mechanism for mobile-device users was developed as the last part of the thesis. The system, based on a personalized classifier, relies on the analysis of the characteristic gait patterns typical of each individual, ensuring an unobtrusive and continuous authentication mechanism. |
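The use of random projections to cut dimensionality while injecting ensemble diversity can be sketched as follows (the nearest-centroid base learner and all sizes are illustrative assumptions, not the thesis' actual boosting or one-class methods):

```python
import numpy as np

def random_projection(X, d_out, seed):
    """Project X (n, d_in) down to d_out dims with a Gaussian random matrix
    (Johnson-Lindenstrauss style), scaled to roughly preserve distances."""
    R = np.random.default_rng(seed).normal(size=(X.shape[1], d_out))
    return X @ R / np.sqrt(d_out)

def nearest_centroid_fit(X, y):
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def nearest_centroid_predict(X, classes, centroids):
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return classes[d.argmin(axis=1)]

def ensemble_predict(X_train, y_train, X_test, n_members=7, d_out=8):
    """Each member sees a different random projection of the data, which both
    cuts dimensionality and injects diversity; a majority vote fuses them."""
    votes = []
    for seed in range(n_members):
        Ztr = random_projection(X_train, d_out, seed)
        Zte = random_projection(X_test, d_out, seed)
        classes, cents = nearest_centroid_fit(Ztr, y_train)
        votes.append(nearest_centroid_predict(Zte, classes, cents))
    votes = np.stack(votes)
    return np.array([np.bincount(v).argmax() for v in votes.T])
```

Each member trains in the low-dimensional projected space, so the per-member cost drops while the vote over differently-projected views recovers stability.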
||||
Address | |||||
Corporate Author | Thesis | Ph.D. thesis | |||
Publisher | Ediciones Graficas Rey | Place of Publication | Editor | Oriol Pujol;Petia Radeva | |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB | Approved | no | ||
Call Number | Admin @ si @ Cas2011 | Serial | 1837 | ||
Permanent link to this record | |||||
Author | David Vazquez | ||||
Title | Domain Adaptation of Virtual and Real Worlds for Pedestrian Detection | Type | Book Whole | ||
Year | 2013 | Publication | PhD Thesis, Universitat de Barcelona-CVC | Abbreviated Journal | |
Volume | 1 | Issue | 1 | Pages | 1-105 |
Keywords | Pedestrian Detection; Domain Adaptation | ||||
Abstract | Pedestrian detection is of paramount interest for many applications, e.g. Advanced Driver Assistance Systems, Intelligent Video Surveillance and Multimedia systems. The most promising pedestrian detectors rely on appearance-based classifiers trained with annotated data. However, the required annotation step is an intensive and subjective task for humans, which makes it worthwhile to minimize their intervention in this process by using computational tools like realistic virtual worlds. The reason to use such tools lies in the fact that they allow the automatic generation of precise and rich annotations of visual information. Nevertheless, the use of this kind of data raises the following question: can a pedestrian appearance model learnt with virtual-world data work successfully for pedestrian detection in real-world scenarios? To answer this question, we conduct different experiments that suggest a positive answer. However, pedestrian classifiers trained with virtual-world data can suffer from the so-called dataset shift problem, just as real-world-based classifiers do. Accordingly, we have designed different domain adaptation techniques to face this problem, all of them integrated in a single framework (V-AYLA). We have explored different methods to train domain-adapted pedestrian classifiers by collecting a few pedestrian samples from the target domain (real world) and combining them with many samples from the source domain (virtual world). The extensive experiments we present show that pedestrian detectors developed within the V-AYLA framework do achieve domain adaptation. Ideally, we would like to adapt our system without any human intervention. Therefore, as a first proof of concept, we also propose an unsupervised domain adaptation technique that avoids human intervention during the adaptation process. To the best of our knowledge, this Thesis is the first work demonstrating adaptation of virtual and real worlds for developing an object detector. Last but not least, we also assessed a different strategy to avoid the dataset shift, which consists in collecting real-world samples and retraining with them in such a way that no bounding boxes of real-world pedestrians have to be provided. We show that the resulting classifier is competitive with respect to the counterpart trained with samples collected by manually annotating pedestrian bounding boxes. The results presented in this Thesis not only end with a proposal for adapting a virtual-world pedestrian detector to the real world, but also go further by pointing out a new methodology that would allow the system to adapt to different situations, which we hope will provide the foundations for future research in this unexplored area. | ||||
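The core idea of combining many source-domain (virtual) samples with a few reweighted target-domain (real) samples can be sketched with a weighted logistic regression (a deliberate simplification for illustration; V-AYLA's actual detectors and adaptation variants are not reproduced here):

```python
import numpy as np

def weighted_logreg(X, y, w, lr=0.05, epochs=2000):
    """Logistic regression with per-sample weights, trained by gradient
    descent. Upweighting the few real-world samples lets them pull the
    decision boundary, learnt mostly from virtual-world data, towards
    the target domain."""
    Xb = np.hstack([X, np.ones((len(X), 1))])     # append bias column
    theta = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ theta))     # sigmoid probabilities
        theta -= lr * (Xb.T @ (w * (p - y))) / w.sum()
    return theta

def predict(theta, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ theta > 0).astype(int)
```

A typical use: give each of the scarce target samples a weight of, say, 20 while source samples keep weight 1, so both domains contribute comparable total mass to the gradient.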
Address | Barcelona | ||||
Corporate Author | Thesis | Ph.D. thesis | |||
Publisher | Ediciones Graficas Rey | Place of Publication | Barcelona | Editor | Antonio Lopez;Daniel Ponsa |
Language | English | Summary Language | Original Title | ||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-84-940530-1-6 | Medium | ||
Area | Expedition | Conference | |||
Notes | adas | Approved | yes | ||
Call Number | ADAS @ adas @ Vaz2013 | Serial | 2276 | ||
Permanent link to this record | |||||
Author | Francesco Ciompi | ||||
Title | Multi-Class Learning for Vessel Characterization in Intravascular Ultrasound | Type | Book Whole | ||
Year | 2012 | Publication | PhD Thesis, Universitat de Barcelona-CVC | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | In this thesis we tackle the problem of automatic characterization of human coronary vessels in the Intravascular Ultrasound (IVUS) image modality. The basis for the whole characterization process is machine learning applied to multi-class problems. In all the presented approaches, the Error-Correcting Output Codes (ECOC) framework is used as the central element for the design of multi-class classifiers.
Two main topics are tackled in this thesis. First, the automatic detection of the vessel borders is presented. For this purpose, a novel context-aware classifier for multi-class classification of the vessel morphology, namely ECOC-DRF, is presented. Based on ECOC-DRF, the lumen border and the media-adventitia border in IVUS are robustly detected by means of a novel holistic approach, achieving an error comparable with inter-observer variability and with state-of-the-art methods. The two vessel borders define the atheroma area of the vessel. In this area, tissue characterization is required. For this purpose, we present a framework for automatic plaque characterization by processing both texture in IVUS images and spectral information in raw Radio Frequency data. Furthermore, a novel method for fusing in-vivo and in-vitro IVUS data for plaque characterization, namely pSFFS, is presented. The method is shown to effectively fuse the data, generating a classifier that improves tissue characterization on both in-vitro and in-vivo datasets. A novel method for automatic video summarization in IVUS sequences is also presented. The method aims to detect the key frames of the sequence, i.e., the frames representative of morphological changes, and provides the basis for video summarization in IVUS as well as markers for partitioning the vessel into morphologically and clinically interesting events. Finally, multi-class learning based on ECOC is applied to lung tissue characterization in Computed Tomography. The novel proposed approach, based on supervised and unsupervised learning, achieves accurate tissue classification on a large and heterogeneous dataset. |
||||
Address | |||||
Corporate Author | Thesis | Ph.D. thesis | |||
Publisher | Ediciones Graficas Rey | Place of Publication | Editor | Petia Radeva;Oriol Pujol | |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB | Approved | no | ||
Call Number | Admin @ si @ Cio2012 | Serial | 2146 | ||
Permanent link to this record | |||||
Author | Marina Alberti | ||||
Title | Detection and Alignment of Vascular Structures in Intravascular Ultrasound using Pattern Recognition Techniques | Type | Book Whole | ||
Year | 2013 | Publication | PhD Thesis, Universitat de Barcelona-CVC | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | In this thesis, several methods for the automatic analysis of Intravascular Ultrasound (IVUS) sequences are presented, aimed at assisting physicians in the diagnosis, the assessment of the intervention and the monitoring of patients with coronary disease. The basis for the developed frameworks is machine learning, pattern recognition and image processing techniques. First, a novel approach for the automatic detection of vascular bifurcations in IVUS is presented. The task is addressed as a binary classification problem (identifying bifurcation and non-bifurcation angular sectors in the sequence images). The multiscale stacked sequential learning algorithm is applied to take into account the spatial and temporal context in IVUS sequences, and the results are refined using a priori information about branching dimensions and geometry. The achieved performance is comparable to intra- and inter-observer variability. Then, we propose a novel method for the automatic non-rigid alignment of IVUS sequences of the same patient, acquired at different moments (before and after percutaneous coronary intervention, or at baseline and follow-up examinations). The method is based on the description of the morphological content of the vessel, obtained by extracting temporal morphological profiles from the IVUS acquisitions by means of methods for segmentation, characterization and detection in IVUS. A technique for non-rigid sequence alignment, the Dynamic Time Warping algorithm, is applied to the profiles and adapted to the specific clinical problem. Two different robust strategies are proposed to address the partial overlapping between frames of corresponding sequences, and a regularization term is introduced to compensate for possible errors in the profile extraction. The benefits of the proposed strategy are demonstrated by extensive validation on synthetic and in-vivo data. The results show the interest of the proposed non-linear alignment and the clinical value of the method. Finally, a novel automatic approach for the extraction of the luminal border in IVUS images is presented. The method applies the multiscale stacked sequential learning algorithm and extends it to 2-D+T in a first classification phase (the identification of lumen and non-lumen regions of the images), while an active contour model is used in a second phase to identify the lumen contour. The method is extended to the longitudinal dimension of the sequences and is validated on a challenging dataset. |
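The Dynamic Time Warping step at the heart of the alignment can be sketched as the standard dynamic program (a scalar toy version; the thesis applies it to morphological profiles with partial-overlap handling and regularization, which are omitted here):

```python
import numpy as np

def dtw(a, b):
    """Dynamic Time Warping distance between two 1-D profiles via the
    standard O(len(a) * len(b)) dynamic program."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of: insertion, deletion, or match
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])
```

For example, `dtw([0, 1, 2, 1, 0], [0, 0, 1, 2, 2, 1, 0])` is 0: the second profile is the first with some samples repeated, a temporal distortion that warping absorbs while a rigid alignment would not.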
||||
Address | Barcelona | ||||
Corporate Author | Thesis | Ph.D. thesis | |||
Publisher | Ediciones Graficas Rey | Place of Publication | Editor | Simone Balocco;Petia Radeva | |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB | Approved | no | ||
Call Number | Admin @ si @ Alb2013 | Serial | 2215 | ||
Permanent link to this record | |||||
Author | Sergio Escalera | ||||
Title | Coding and Decoding Design of ECOCs for Multi-class Pattern and Object Recognition | Type | Book Whole | ||
Year | 2008 | Publication | PhD Thesis, Universitat de Barcelona-CVC | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Many real problems require multi-class decisions. In the Pattern Recognition field, many techniques have been proposed to deal with the binary problem. However, the extension of many 2-class classifiers to the multi-class case is a hard task. In this sense, Error-Correcting Output Codes (ECOC) have proven to be a powerful tool for combining any number of binary classifiers to model multi-class problems. But there are still many open issues about the capabilities of the ECOC framework. In this thesis, the two main stages of an ECOC design are analyzed: the coding and the decoding steps. We present different problem-dependent designs. These designs take advantage of knowledge of the problem domain to minimize the number of classifiers while obtaining a high classification performance. On the other hand, we analyze the ECOC codification in order to define new decoding rules that take full advantage of the information provided at the coding step. Moreover, as a successful classification requires a rich feature set, new feature detection/extraction techniques are presented and evaluated on the new ECOC designs. The evaluation of the new methodology is performed on different real and synthetic datasets: the UCI Machine Learning Repository, handwriting symbols, traffic signs from a Mobile Mapping System, Intravascular Ultrasound images, the Caltech Repository dataset and a Chagas disease dataset. The results of this thesis show that significant performance improvements are obtained on both traditional coding and decoding ECOC designs when the new coding and decoding rules are taken into account. |
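The coding/decoding pipeline the thesis builds on can be illustrated with the classic exhaustive code for four classes and Hamming decoding (a textbook sketch, not the problem-dependent designs proposed in the thesis):

```python
import numpy as np

# Exhaustive ECOC coding matrix for 4 classes (Dietterich-Bakiri style):
# one row per class, one column per binary problem, entries in {-1, +1}.
# Any two rows differ in 4 positions, so a single binary classifier error
# can still be corrected at decoding time.
M = np.array([
    [ 1,  1,  1,  1,  1,  1,  1],
    [-1, -1, -1, -1,  1,  1,  1],
    [-1, -1,  1,  1, -1, -1,  1],
    [-1,  1, -1,  1, -1,  1, -1],
])

def ecoc_decode(outputs, M):
    """Hamming decoding: return the class whose codeword is closest to the
    vector of binary classifier outputs (each in {-1, +1})."""
    return int((M != outputs).sum(axis=1).argmin())

# One of the 7 binary classifiers errs on a sample of class 2:
outputs = M[2].copy()
outputs[0] *= -1
decoded = ecoc_decode(outputs, M)   # recovers class 2 despite the flipped bit
```

The thesis' decoding rules replace this plain Hamming distance with measures that exploit the coding-step information, but the codeword-matching structure is the same.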
||||
Address | |||||
Corporate Author | Thesis | Ph.D. thesis | |||
Publisher | Ediciones Graficas Rey | Place of Publication | Editor | Petia Radeva;Oriol Pujol | |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB; HuPBA | Approved | no | ||
Call Number | Admin @ si @ Esc2008b | Serial | 2217 | ||
Permanent link to this record | |||||
Author | Michal Drozdzal | ||||
Title | Sequential image analysis for computer-aided wireless endoscopy | Type | Book Whole | ||
Year | 2014 | Publication | PhD Thesis, Universitat de Barcelona-CVC | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Wireless Capsule Endoscopy (WCE) is a technique for inner visualization of the entire small intestine and, thus, offers an interesting perspective on intestinal motility. The two major drawbacks of this technique are: 1) the huge amount of data acquired by WCE makes motility analysis tedious; and 2) since the capsule is the first tool that offers complete inner visualization of the small intestine, the exact importance of the observed events is still an open issue. Therefore, in this thesis, a novel computer-aided system for intestinal motility analysis is presented. The goal of the system is to provide an easily comprehensible visual description of motility-related intestinal events to a physician. In order to do so, several tools based either on computer vision concepts or on machine learning techniques are presented. A method for transforming the 3D video signal into a holistic image of intestinal motility, called a motility bar, is proposed. The method calculates the optimal mapping from video into image from the intestinal motility point of view.
To characterize intestinal motility, methods for the automatic extraction of motility information from WCE are presented. Two of them are based on the motility bar and two of them are based on frame-per-frame analysis. In particular, four algorithms dealing with the problems of intestinal contraction detection, lumen size estimation, intestinal content characterization and wrinkle frame detection are proposed and validated. The results of the algorithms are converted into sequential features using an online statistical test. This test is designed to work with multivariate data streams. To this end, we propose a novel formulation of a concentration inequality that is introduced into a robust adaptive windowing algorithm for multivariate data streams. The algorithm is used to obtain a robust representation of segments with constant intestinal motility activity. The obtained sequential features are shown to be discriminative in the problem of abnormal motility characterization. Finally, we tackle the problem of efficient labeling. To this end, we incorporate active learning concepts into the problems present in WCE data and propose two approaches. The first one is based on the concepts of sequential learning, and the second one adapts partition-based active learning to an error-free labeling scheme. All these steps are sufficient to provide an extensive visual description of intestinal motility that can be used by an expert as a decision support system. |
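The adaptive-windowing idea can be sketched with a scalar Hoeffding-style test (the thesis' contribution is a multivariate concentration inequality; the univariate version, the `delta` confidence parameter, and the minimum window sizes below are illustrative assumptions):

```python
import math
from collections import deque

def hoeffding_eps(n0, n1, delta=0.01):
    """Hoeffding-style threshold on the difference of two sample means of
    values in [0, 1]."""
    m = 1.0 / (1.0 / n0 + 1.0 / n1)          # harmonic combination of sizes
    return math.sqrt(math.log(4.0 / delta) / (2.0 * m))

def adaptive_window(stream, delta=0.01):
    """Maintain a window over a stream of values in [0, 1]; whenever two
    adjacent sub-windows have significantly different means, drop the stale
    prefix. Returns the time indices at which a change was declared."""
    window, changes = deque(), []
    for t, x in enumerate(stream):
        window.append(x)
        shrunk = True
        while shrunk and len(window) >= 10:
            shrunk = False
            vals = list(window)
            for cut in range(5, len(vals) - 4):   # >= 5 samples per side
                w0, w1 = vals[:cut], vals[cut:]
                mu0, mu1 = sum(w0) / len(w0), sum(w1) / len(w1)
                if abs(mu0 - mu1) > hoeffding_eps(len(w0), len(w1), delta):
                    for _ in range(cut):
                        window.popleft()          # keep only the new regime
                    changes.append(t)
                    shrunk = True
                    break
    return changes
```

The quadratic scan over cut points keeps the sketch readable; production stream algorithms in the ADWIN family use bucketed summaries to make this check cheap.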
||||
Address | |||||
Corporate Author | Thesis | Ph.D. thesis | |||
Publisher | Ediciones Graficas Rey | Place of Publication | Editor | Petia Radeva | |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-84-940902-3-3 | Medium | ||
Area | Expedition | Conference | |||
Notes | MILAB | Approved | no | ||
Call Number | Admin @ si @ Dro2014 | Serial | 2486 | ||
Permanent link to this record | |||||
Author | Antonio Hernandez | ||||
Title | From pixels to gestures: learning visual representations for human analysis in color and depth data sequences | Type | Book Whole | ||
Year | 2015 | Publication | PhD Thesis, Universitat de Barcelona-CVC | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | The visual analysis of humans from images is an important topic of interest due to its relevance to many computer vision applications like pedestrian detection, monitoring and surveillance, human-computer interaction, e-health or content-based image retrieval, among others.
In this dissertation we are interested in learning different visual representations of the human body that are helpful for the visual analysis of humans in images and video sequences. To that end, we analyze both RGB and depth image modalities and address the problem from three different research lines, at different levels of abstraction; from pixels to gestures: human segmentation, human pose estimation and gesture recognition. First, we show how binary segmentation (object vs. background) of the human body in image sequences is helpful to remove all the background clutter present in the scene. The presented method, based on Graph cuts optimization, enforces spatio-temporal consistency of the produced segmentation masks among consecutive frames. Secondly, we present a framework for multi-label segmentation for obtaining much more detailed segmentation masks: instead of just obtaining a binary representation separating the human body from the background, finer segmentation masks can be obtained separating the different body parts. At a higher level of abstraction, we aim for a simpler yet descriptive representation of the human body. Human pose estimation methods usually rely on skeletal models of the human body, formed by segments (or rectangles) that represent the body limbs, appropriately connected following the kinematic constraints of the human body. In practice, such skeletal models must fulfill some constraints in order to allow for efficient inference, while actually limiting the expressiveness of the model. In order to cope with this, we introduce a top-down approach for predicting the position of the body parts in the model, using a mid-level part representation based on Poselets. Finally, we propose a framework for gesture recognition based on the bag of visual words framework. We leverage the benefits of RGB and depth image modalities by combining modality-specific visual vocabularies in a late fusion fashion. 
A new rotation-variant depth descriptor is presented, yielding better results than other state-of-the-art descriptors. Moreover, spatio-temporal pyramids are used to encode rough spatial and temporal structure. In addition, we present a probabilistic reformulation of Dynamic Time Warping for gesture segmentation in video sequences. A Gaussian-based probabilistic model of a gesture is learnt, implicitly encoding possible deformations in both spatial and time domains. |
||||
Address | January 2015 | ||||
Corporate Author | Thesis | Ph.D. thesis | |||
Publisher | Ediciones Graficas Rey | Place of Publication | Editor | Sergio Escalera;Stan Sclaroff | |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-84-940902-0-2 | Medium | ||
Area | Expedition | Conference | |||
Notes | HuPBA;MILAB | Approved | no | ||
Call Number | Admin @ si @ Her2015 | Serial | 2576 | ||
Permanent link to this record | |||||
Author | Adriana Romero | ||||
Title | Assisting the training of deep neural networks with applications to computer vision | Type | Book Whole | ||
Year | 2015 | Publication | PhD Thesis, Universitat de Barcelona-CVC | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Deep learning has recently been enjoying an increasing popularity due to its success in solving challenging tasks. In particular, deep learning has proven to be effective in a large variety of computer vision tasks, such as image classification, object recognition and image parsing. Contrary to previous research, which required expert-engineered feature representations in order to succeed, deep learning attempts to learn representation hierarchies automatically from data. More recently, the trend has been to go deeper with representation hierarchies.
Learning (very) deep representation hierarchies is a challenging task, which involves the optimization of highly non-convex functions. Therefore, the search for algorithms to ease the learning of (very) deep representation hierarchies from data is extensive and ongoing. In this thesis, we tackle the challenging problem of easing the learning of (very) deep representation hierarchies. We present a hyper-parameter-free, off-the-shelf, simple and fast unsupervised algorithm to discover hidden structure from the input data by enforcing a very strong form of sparsity. We study the applicability and potential of the algorithm to learn representations of varying depth in a handful of applications and domains, highlighting its ability to provide discriminative feature representations that are able to achieve top performance. Yet, while emphasizing the great value of unsupervised learning methods when labeled data is scarce, the recent industrial success of deep learning has revolved around supervised learning. Supervised learning is currently the focus of many recent research advances, which have been shown to excel at many computer vision tasks. Top-performing systems often involve very large and deep models, which are not well suited for applications with time or memory limitations. More in line with current trends, we focus on making top-performing models more efficient by designing very deep and thin models. Since training such very deep models still appears to be a challenging task, we introduce a novel algorithm that guides the training of very thin and deep models by hinting at their intermediate representations. Very deep and thin models trained by the proposed algorithm end up extracting feature representations that are comparable to, or even better performing than, the ones extracted by large state-of-the-art models, while compellingly reducing the time and memory consumption of the model. |
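The hint-based guidance of intermediate representations can be sketched with a toy L2 hint loss between a student's guided layer, lifted by a learned regressor, and a teacher's hint layer (all shapes, data, and the single-layer "networks" here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "teacher" hint: a fixed map producing a 16-d intermediate representation.
X = rng.normal(size=(64, 10))
W_teacher = rng.normal(size=(10, 16))
hints = np.tanh(X @ W_teacher)

# Thin "student": an 8-d guided layer plus a learned regressor W_r that lifts
# it to the teacher's space so the two can be compared with an L2 hint loss.
W_student = 0.1 * rng.normal(size=(10, 8))
W_r = 0.1 * rng.normal(size=(8, 16))

def hint_loss(X, hints, W_student, W_r):
    guided = np.tanh(X @ W_student)           # student's guided layer
    return 0.5 * np.mean((guided @ W_r - hints) ** 2)

# One gradient step on the regressor (the student weights are frozen here
# purely to keep the sketch short):
lr = 0.5
guided = np.tanh(X @ W_student)
grad_Wr = guided.T @ (guided @ W_r - hints) / hints.size
before = hint_loss(X, hints, W_student, W_r)
W_r = W_r - lr * grad_Wr
after = hint_loss(X, hints, W_student, W_r)
```

In the full scheme, the student weights up to the guided layer are trained jointly with the regressor against this loss before the usual supervised training of the whole thin network.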
||||
Address | October 2015 | ||||
Corporate Author | Thesis | Ph.D. thesis | |||
Publisher | Ediciones Graficas Rey | Place of Publication | Editor | Carlo Gatta;Petia Radeva | |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB | Approved | no | ||
Call Number | Admin @ si @ Rom2015 | Serial | 2707 | ||
Permanent link to this record |