Records | |||||
Author | M. Cruz; Cristhian A. Aguilera-Carrasco; Boris X. Vintimilla; Ricardo Toledo; Angel Sappa | ||||
Title | Cross-spectral image registration and fusion: an evaluation study | Type | Conference Article | ||
Year | 2015 | Publication | 2nd International Conference on Machine Vision and Machine Learning | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | multispectral imaging; image registration; data fusion; infrared and visible spectra | ||||
Abstract | This paper presents a preliminary study on the registration and fusion of cross-spectral images. The objective is to evaluate the validity of widely used computer vision approaches when applied at different spectral bands. In particular, we are interested in merging images from the infrared (both long-wave infrared, LWIR, and near infrared, NIR) and the visible spectrum (VS). Experimental results with different data sets are presented. | ||||
Address | Barcelona; July 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | MVML | ||
Notes | ADAS; 600.076 | Approved | no | ||
Call Number | Admin @ si @ CAV2015 | Serial | 2629 | ||
Permanent link to this record | |||||
Author | G. Thorvaldsen; Joana Maria Pujadas-Mora; T. Andersen; L. Eikvil; Josep Llados; Alicia Fornes; Anna Cabre | ||||
Title | A Tale of two Transcriptions | Type | Journal | ||
Year | 2015 | Publication | Historical Life Course Studies | Abbreviated Journal | |
Volume | 2 | Issue | Pages | 1-19 | |
Keywords | Nominative Sources; Census; Vital Records; Computer Vision; Optical Character Recognition; Word Spotting | ||||
Abstract | This article explains how two projects implement semi-automated transcription routines: for census sheets in Norway and marriage protocols from Barcelona. The Spanish system was created to transcribe the marriage license books from 1451 to 1905 for the Barcelona area, one of the world's longest series of preserved vital records. Thus, in the project "Five Centuries of Marriages" (5CofM) at the Autonomous University of Barcelona's Center for Demographic Studies, the Barcelona Historical Marriage Database has been built. More than 600,000 records were transcribed by 150 transcribers working online. The Norwegian material is cross-sectional, as it is the 1891 census, recorded on one sheet per person. This format, and the underlining of keywords for several variables, made it more feasible to semi-automate data entry than when many persons are listed on the same page. While Optical Character Recognition (OCR) for printed text is scientifically mature, computer vision research is now focused on more difficult problems such as handwriting recognition. In the marriage project, document analysis methods have been proposed to automatically recognize the marriage licenses. Fully automatic recognition is still a challenge, but some promising results have been obtained. In Spain, Norway and elsewhere the source material is available as scanned images on the Internet, opening up the possibility of further international cooperation on automating the transcription of historical source materials. As in projects to digitize printed materials, the optimal solution for handwritten sources is likely to be a combination of manual transcription and machine-assisted recognition. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 2352-6343 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | DAG; 600.077; 602.006 | Approved | no | ||
Call Number | Admin @ si @ TPA2015 | Serial | 2582 | ||
Permanent link to this record | |||||
Author | Ivan Huerta; Marco Pedersoli; Jordi Gonzalez; Alberto Sanfeliu | ||||
Title | Combining where and what in change detection for unsupervised foreground learning in surveillance | Type | Journal Article | ||
Year | 2015 | Publication | Pattern Recognition | Abbreviated Journal | PR |
Volume | 48 | Issue | 3 | Pages | 709-719 |
Keywords | Object detection; Unsupervised learning; Motion segmentation; Latent variables; Support vector machine; Multiple appearance models; Video surveillance | ||||
Abstract | Change detection is the most important task for video surveillance analytics such as foreground and anomaly detection. Current foreground detectors learn models from annotated images, since the goal is to generate a robust foreground model able to detect changes in all possible scenarios. Unfortunately, manual labelling is very expensive. Most advanced supervised learning techniques based on generic object detection datasets currently exhibit very poor performance when applied to surveillance datasets, because of the unconstrained nature of such environments in terms of types and appearances of objects. In this paper, we take advantage of change detection for training multiple foreground detectors in an unsupervised manner. We use statistical learning techniques which exploit latent parameters for selecting the best foreground model parameters for a given scenario. In essence, the main novelty of our proposed approach is to combine the where (motion segmentation) and the what (learning procedure) of change detection in an unsupervised way, improving the specificity and generalization power of foreground detectors at the same time. We propose a framework based on latent support vector machines that, given a noisy initialization based on motion cues, learns the correct position, aspect ratio, and appearance of all moving objects in a particular scene. Specificity is achieved by learning the particular change detections of a given scenario, and generalization is guaranteed since our method can be applied to any possible scene and foreground object, as demonstrated by experimental results that outperform the state of the art. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ISE; 600.063; 600.078 | Approved | no | ||
Call Number | Admin @ si @ HPG2015 | Serial | 2589 | ||
Permanent link to this record | |||||
Author | Marc Bolaños; Maite Garolera; Petia Radeva | ||||
Title | Object Discovery using CNN Features in Egocentric Videos | Type | Conference Article | ||
Year | 2015 | Publication | Pattern Recognition and Image Analysis, Proceedings of the 7th Iberian Conference, IbPRIA 2015 | Abbreviated Journal | |
Volume | 9117 | Issue | Pages | 67-74 | |
Keywords | Object discovery; Egocentric videos; Lifelogging; CNN | ||||
Abstract | Lifelogging devices based on photo/video are spreading faster every day. This growth can bring great benefits if methods are developed to extract meaningful information about the user wearing the device and his/her environment. In this paper, we propose a semi-supervised strategy for easily discovering objects relevant to the person wearing a first-person camera. Given the egocentric video sequence acquired by the camera, our method uses both appearance, extracted by means of a deep convolutional neural network, and an object refill methodology that allows discovering objects even when they appear in only a small fraction of the images in the collection. We validate our method on a sequence of 1000 egocentric daily images and obtain results with an F-measure of 0.5, 0.17 better than the state-of-the-art approach. | ||||
Address | Santiago de Compostela; Spain; June 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | 0302-9743 | ISBN | 978-3-319-19389-2 | Medium | |
Area | Expedition | Conference | IbPRIA | ||
Notes | MILAB | Approved | no | ||
Call Number | Admin @ si @ BGR2015 | Serial | 2596 | ||
Permanent link to this record | |||||
Author | Josep M. Gonfaus; Marco Pedersoli; Jordi Gonzalez; Andrea Vedaldi; Xavier Roca | ||||
Title | Factorized appearances for object detection | Type | Journal Article | ||
Year | 2015 | Publication | Computer Vision and Image Understanding | Abbreviated Journal | CVIU |
Volume | 138 | Issue | Pages | 92–101 | |
Keywords | Object recognition; Deformable part models; Learning and sharing parts; Discovering discriminative parts | ||||
Abstract | Deformable object models capture variations in an object’s appearance that can be represented as image deformations. Other effects such as out-of-plane rotations, three-dimensional articulations, and self-occlusions are often captured by considering mixture of deformable models, one per object aspect. A more scalable approach is representing instead the variations at the level of the object parts, applying the concept of a mixture locally. Combining a few part variations can in fact cheaply generate a large number of global appearances.
A limited version of this idea was proposed by Yang and Ramanan [1] for human pose detection. In this paper we apply it to the task of generic object category detection and extend it in several ways. First, we propose a model for the relationship between part appearances more general than the tree of Yang and Ramanan [1], which is more suitable for generic categories. Second, we treat part locations as well as their appearance as latent variables so that training does not need part annotations but only the object bounding boxes. Third, we modify the weakly-supervised learning of Felzenszwalb et al. and Girshick et al. [2], [3] to handle a significantly more complex latent structure. Our model is evaluated on standard object detection benchmarks and is found to improve over existing approaches, yielding state-of-the-art results for several object categories. |
||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ISE; 600.063; 600.078 | Approved | no | ||
Call Number | Admin @ si @ GPG2015 | Serial | 2705 | ||
Permanent link to this record | |||||
Author | Alejandro Gonzalez Alzate; Gabriel Villalonga; Jiaolong Xu; David Vazquez; Jaume Amores; Antonio Lopez | ||||
Title | Multiview Random Forest of Local Experts Combining RGB and LIDAR data for Pedestrian Detection | Type | Conference Article | ||
Year | 2015 | Publication | IEEE Intelligent Vehicles Symposium IV2015 | Abbreviated Journal | |
Volume | Issue | Pages | 356-361 | ||
Keywords | Pedestrian Detection | ||||
Abstract | Despite recent significant advances, pedestrian detection continues to be an extremely challenging problem in real scenarios. In order to develop a detector that successfully operates under these conditions, it becomes critical to leverage multiple cues, multiple imaging modalities and a strong multi-view classifier that accounts for different pedestrian views and poses. In this paper we provide an extensive evaluation that gives insight into how each of these aspects (multi-cue, multimodality and strong multi-view classifier) affects performance both individually and when integrated together. In the multimodality component we explore the fusion of RGB and depth maps obtained by high-definition LIDAR, a modality that is only recently starting to receive attention. As our analysis reveals, although all the aforementioned aspects significantly help in improving the performance, the fusion of visible spectrum and depth information boosts the accuracy by a much larger margin. The resulting detector not only ranks among the best performers in the challenging KITTI benchmark, but is built upon very simple blocks that are easy to implement and computationally efficient. These simple blocks can easily be replaced with more sophisticated ones recently proposed, such as convolutional neural networks for feature representation, to further improve the accuracy. | ||||
Address | Seoul; Korea; June 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | ACDC | Expedition | Conference | IV | |
Notes | ADAS; 600.076; 600.057; 600.054 | Approved | no | ||
Call Number | ADAS @ adas @ GVX2015 | Serial | 2625 | ||
Permanent link to this record | |||||
Author | Alejandro Gonzalez Alzate; Gabriel Villalonga; German Ros; David Vazquez; Antonio Lopez | ||||
Title | 3D-Guided Multiscale Sliding Window for Pedestrian Detection | Type | Conference Article | ||
Year | 2015 | Publication | Pattern Recognition and Image Analysis, Proceedings of the 7th Iberian Conference, IbPRIA 2015 | Abbreviated Journal | |
Volume | 9117 | Issue | Pages | 560-568 | |
Keywords | Pedestrian Detection | ||||
Abstract | The most relevant modules of a pedestrian detector are candidate generation and candidate classification. The former aims at presenting image windows to the latter so that they can be classified as containing a pedestrian or not. Much attention has been paid to the classification module, while candidate generation has mainly relied on the (multiscale) sliding window pyramid. However, candidate generation is critical for achieving real-time performance. In this paper we assume a context of autonomous driving based on stereo vision. Accordingly, we evaluate the effect of taking into account the 3D information (derived from the stereo) in order to prune the hundreds of thousands of windows per image generated by the classical pyramidal sliding window. For our study we use a multimodal (RGB, disparity) and multi-descriptor (HOG, LBP, HOG+LBP) holistic ensemble based on linear SVM. Evaluation on data from the challenging KITTI benchmark suite shows the effectiveness of using 3D information to dramatically reduce the number of candidate windows, even improving the overall pedestrian detection accuracy. | ||||
Address | Santiago de Compostela; Spain; June 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | ACDC | Expedition | Conference | IbPRIA | |
Notes | ADAS; 600.076; 600.057; 600.054 | Approved | no | ||
Call Number | ADAS @ adas @ GVR2015 | Serial | 2585 | ||
Permanent link to this record | |||||
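The 3D pruning idea in the abstract above can be sketched in a few lines. This is an illustrative simplification, not the authors' code: the focal length, stereo baseline, pedestrian height, and the 30% tolerance are all assumed values, and a real system would apply this test densely over a full disparity map rather than to single disparity values.

```python
# Hedged sketch of 3D-guided window pruning (assumed calibration constants).
FOCAL_PX = 800.0      # assumed focal length, in pixels
BASELINE_M = 0.5      # assumed stereo baseline, in metres
PED_HEIGHT_M = 1.7    # assumed average pedestrian height, in metres

def depth_from_disparity(disparity_px):
    """Standard pinhole stereo relation: z = f * B / d."""
    return FOCAL_PX * BASELINE_M / disparity_px

def expected_window_height(disparity_px):
    """Pixel height a pedestrian at this disparity should occupy: h = f * H / z."""
    z = depth_from_disparity(disparity_px)
    return FOCAL_PX * PED_HEIGHT_M / z

def keep_window(window_h_px, disparity_px, tolerance=0.3):
    """Prune windows whose height deviates more than 30% from the 3D prediction."""
    h_expected = expected_window_height(disparity_px)
    return abs(window_h_px - h_expected) <= tolerance * h_expected

# A window 136 px tall over a region with disparity 40 px:
# depth is 10 m, so the expected pedestrian height is exactly 136 px.
print(keep_window(136, 40))  # True
```

Only windows passing `keep_window` would be forwarded to the multi-descriptor classifier, which is how the abstract's reduction from hundreds of thousands of candidates is obtained.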
Author | Jorge Bernal; F. Javier Sanchez; Gloria Fernandez Esparrach; Debora Gil; Cristina Rodriguez de Miguel; Fernando Vilariño | ||||
Title | WM-DOVA Maps for Accurate Polyp Highlighting in Colonoscopy: Validation vs. Saliency Maps from Physicians | Type | Journal Article | ||
Year | 2015 | Publication | Computerized Medical Imaging and Graphics | Abbreviated Journal | CMIG |
Volume | 43 | Issue | Pages | 99-111 | |
Keywords | Polyp localization; Energy Maps; Colonoscopy; Saliency; Valley detection | ||||
Abstract | We introduce in this paper a novel polyp localization method for colonoscopy videos. Our method is based on a model of appearance for polyps which defines polyp boundaries in terms of valley information. We propose the integration of valley information in a robust way, fostering the complete, concave and continuous boundaries typically associated with polyps. This integration is done by using a window of radial sectors which accumulate valley information to create WM-DOVA energy maps related to the likelihood of polyp presence. We perform a double validation of our maps, which includes the introduction of two new databases, one of which is, to our knowledge, the first fully annotated database with associated clinical metadata. First we assess that the highest value corresponds with the location of the polyp in the image. Second, we show that WM-DOVA energy maps are comparable with saliency maps obtained from physicians' fixations recorded via an eye-tracker. Finally, we prove that our method outperforms state-of-the-art computational saliency results. Our method shows good performance, particularly for small polyps, which are reported to be the main source of polyp miss-rate, indicating the potential applicability of our method in clinical practice. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0895-6111 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | MV; IAM; 600.047; 600.060; 600.075;SIAI | Approved | no | ||
Call Number | Admin @ si @ BSF2015 | Serial | 2609 | ||
Permanent link to this record | |||||
Author | Wenjuan Gong; Y. Huang; Jordi Gonzalez; Liang Wang | ||||
Title | An Effective Solution to Double Counting Problem in Human Pose Estimation | Type | Miscellaneous | ||
Year | 2015 | Publication | arXiv | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | Pose estimation; double counting problem; mixture of parts model | ||||
Abstract | The mixture of parts model has been successfully applied to solve the 2D human pose estimation problem, either as an explicitly trained body part model or as latent variables for pedestrian detection. Even in the era of massive application of deep learning techniques, the mixture of parts model is still effective in solving certain problems, especially in cases with limited numbers of training samples. In this paper, we consider using the mixture of parts model for pose estimation, wherein a tree structure is utilized for representing relations between connected body parts. This strategy facilitates training and inference of the model but suffers from the double counting problem, where one detected body part is counted twice due to the lack of constraints among unconnected body parts. To solve this problem, we propose a generalized solution in which various part attributes are captured by multiple features so as to avoid double counting. Qualitative and quantitative experimental results on a publicly available dataset demonstrate the effectiveness of our proposed method. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ISE; 600.078 | Approved | no | ||
Call Number | Admin @ si @ GHG2015 | Serial | 2590 | ||
Permanent link to this record | |||||
Author | Aleksandr Setkov; Fabio Martinez Carillo; Michele Gouiffes; Christian Jacquemin; Maria Vanrell; Ramon Baldrich | ||||
Title | DAcImPro: A Novel Database of Acquired Image Projections and Its Application to Object Recognition | Type | Conference Article | ||
Year | 2015 | Publication | Advances in Visual Computing, Proceedings of the 11th International Symposium, ISVC 2015, Part II | Abbreviated Journal | |
Volume | 9475 | Issue | Pages | 463-473 | |
Keywords | Projector-camera systems; Feature descriptors; Object recognition | ||||
Abstract | Projector-camera systems are designed to improve projection quality by comparing original images with their captured projections, which is usually complicated by high photometric and geometric variations. Many research works address this problem using their own test data, which makes it extremely difficult to compare different proposals. This paper has two main contributions. Firstly, we introduce a new database of acquired image projections (DAcImPro) that, covering different photometric and geometric conditions and providing data for ground-truth computation, can serve to evaluate different algorithms for projector-camera systems. Secondly, a new object recognition scenario based on acquired projections is presented, which could be of great interest in domains such as home video projection and public presentations. We show that the task is more challenging than the classical recognition problem and thus requires additional pre-processing, such as color compensation or projection area selection. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Springer International Publishing | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | 0302-9743 | ISBN | 978-3-319-27862-9 | Medium | |
Area | Expedition | Conference | ISVC | ||
Notes | CIC | Approved | no | ||
Call Number | Admin @ si @ SMG2015 | Serial | 2736 | ||
Permanent link to this record | |||||
Author | Monica Piñol; Angel Sappa; Ricardo Toledo | ||||
Title | Adaptive Feature Descriptor Selection based on a Multi-Table Reinforcement Learning Strategy | Type | Journal Article | ||
Year | 2015 | Publication | Neurocomputing | Abbreviated Journal | NEUCOM |
Volume | 150 | Issue | A | Pages | 106–115 |
Keywords | Reinforcement learning; Q-learning; Bag of features; Descriptors | ||||
Abstract | This paper presents and evaluates a framework to improve the performance of visual object classification methods which are based on the usage of image feature descriptors as inputs. The goal of the proposed framework is to learn the best descriptor for each image in a given database. This goal is reached by means of a reinforcement learning process using minimal information. The visual classification system used to demonstrate the proposed framework is based on a bag of features scheme, and the reinforcement learning technique is implemented through the Q-learning approach. The behavior of reinforcement learning with different state definitions is evaluated. Additionally, a method that combines all these states is formulated in order to select the optimal state. Finally, the chosen actions are obtained from the best set of image descriptors in the literature: PHOW, SIFT, C-SIFT, SURF and Spin. Experimental results using two public databases (ETH and COIL) are provided, showing both the validity of the proposed approach and comparisons with the state of the art. In all cases the best results are obtained with the proposed approach. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ADAS; 600.055; 600.076 | Approved | no | ||
Call Number | Admin @ si @ PST2015 | Serial | 2473 | ||
Permanent link to this record | |||||
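The descriptor-selection loop described in the abstract above can be illustrated with a minimal tabular Q-learning sketch. This is a hedged toy, not the paper's multi-table implementation: the single coarse state, the reward scheme, and the learning-rate values are assumptions; only the action set (PHOW, SIFT, C-SIFT, SURF, Spin) comes from the abstract.

```python
import random

# Actions are the candidate feature descriptors named in the abstract.
ACTIONS = ["PHOW", "SIFT", "C-SIFT", "SURF", "Spin"]

def q_learning_step(Q, state, action, reward, next_state,
                    alpha=0.1, gamma=0.9):
    """Standard Q-learning update: Q(s,a) += alpha*(r + gamma*max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[next_state][a] for a in ACTIONS)
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

def choose_descriptor(Q, state, epsilon=0.1):
    """Epsilon-greedy selection of the descriptor to try for the current image."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[state][a])

# Toy run: a single hypothetical state, reward 1.0 when the bag-of-features
# classifier using the chosen descriptor is correct.
Q = {"s0": {a: 0.0 for a in ACTIONS}}
q_learning_step(Q, "s0", "SIFT", reward=1.0, next_state="s0")
print(round(Q["s0"]["SIFT"], 3))  # 0.1
```

In the paper the state is derived from image properties and several such tables are combined; the update rule itself is the standard one shown here.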
Author | Eduardo Tusa; Arash Akbarinia; Raquel Gil Rodriguez; Corina Barbalata | ||||
Title | Real-Time Face Detection and Tracking Utilising OpenMP and ROS | Type | Conference Article | ||
Year | 2015 | Publication | 3rd Asia-Pacific Conference on Computer Aided System Engineering | Abbreviated Journal | |
Volume | Issue | Pages | 179 - 184 | ||
Keywords | RGB-D; Kinect; Human Detection and Tracking; ROS; OpenMP | ||||
Abstract | The first requisite for a robot to succeed in social interactions is accurate human localisation, i.e. subject detection and tracking. Later, it is estimated whether an interaction partner seeks attention, for example by interpreting the position and orientation of the body. In computer vision, these cues are usually obtained from colour images, whose quality is degraded in ill-illuminated social scenes. In these scenarios depth sensors offer a richer representation; therefore, it is important to combine colour and depth information. The second aspect that plays a fundamental role in the acceptance of social robots is their ability to run in real time. Processing colour and depth images is computationally demanding. To overcome this, we propose a parallelisation strategy for face detection and tracking based on two different architectures: message passing and shared memory. Our results demonstrate high accuracy in low computational time, with the parallel implementation processing nine times more frames, providing real-time social robot interaction. | ||||
Address | Quito; Ecuador; July 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | APCASE | ||
Notes | NEUROBIT | Approved | no | ||
Call Number | Admin @ si @ TAG2015 | Serial | 2659 | ||
Permanent link to this record | |||||
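The paper above parallelises with OpenMP (shared memory) and ROS (message passing); as a language-neutral illustration of the shared-memory variant, the sketch below fans per-frame work out to a thread pool. The detector is a deliberately trivial stand-in, not the actual face detector, and frames are plain lists rather than RGB-D images.

```python
from concurrent.futures import ThreadPoolExecutor

def detect_faces(frame):
    """Stand-in for a real detector: report pixel indices above a brightness threshold."""
    return [i for i, px in enumerate(frame) if px > 128]

def process_stream(frames, workers=4):
    """Shared-memory style parallelism: each worker pulls frames from a common pool."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(detect_faces, frames))

# Three toy "frames"; map preserves frame order even though work runs in parallel.
frames = [[0, 200, 50], [255, 0, 0], [10, 10, 10]]
print(process_stream(frames))  # [[1], [0], []]
```

The message-passing architecture would instead push frames through queues between separate processes (as ROS topics do); the per-frame detector code is identical in both designs.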
Author | Miguel Oliveira; Victor Santos; Angel Sappa; P. Dias | ||||
Title | Scene Representations for Autonomous Driving: an approach based on polygonal primitives | Type | Conference Article | ||
Year | 2015 | Publication | 2nd Iberian Robotics Conference ROBOT2015 | Abbreviated Journal | |
Volume | 417 | Issue | Pages | 503-515 | |
Keywords | Scene reconstruction; Point cloud; Autonomous vehicles | ||||
Abstract | In this paper, we present a novel methodology to compute a 3D scene representation. The algorithm uses macro scale polygonal primitives to model the scene. This means that the representation of the scene is given as a list of large scale polygons that describe the geometric structure of the environment. Results show that the approach is capable of producing accurate descriptions of the scene. In addition, the algorithm is very efficient when compared to other techniques. | ||||
Address | Lisboa; Portugal; November 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ROBOT | ||
Notes | ADAS; 600.076; 600.086 | Approved | no | ||
Call Number | Admin @ si @ OSS2015a | Serial | 2662 | ||
Permanent link to this record | |||||
Author | Michal Drozdzal; Santiago Segui; Petia Radeva; Carolina Malagelada; Fernando Azpiroz; Jordi Vitria | ||||
Title | Motility bar: a new tool for motility analysis of endoluminal videos | Type | Journal Article | ||
Year | 2015 | Publication | Computers in Biology and Medicine | Abbreviated Journal | CBM |
Volume | 65 | Issue | Pages | 320-330 | |
Keywords | Small intestine; Motility; WCE; Computer vision; Image classification | ||||
Abstract | Wireless Capsule Endoscopy (WCE) provides a new perspective of the small intestine, since it enables, for the first time, visualization of the entire organ. However, the long video analysis time, due to the large amount of data in a single WCE study, has been an important factor impeding the widespread use of the capsule as a tool for detecting intestinal abnormalities. The introduction of WCE therefore triggered a new field for the application of computational methods, and in particular of computer vision. In this paper, we follow the computational approach and offer a new perspective on the small intestine motility problem. Our approach consists of three steps: first, we review a tool for visualizing the motility information contained in WCE video; second, we propose algorithms for the characterization of two motility building blocks: a contraction detector and lumen size estimation; finally, we introduce an approach to detect segments of stable motility behavior. Our claims are supported by an evaluation performed on 10 WCE videos, suggesting that our methods ably capture the intestinal motility information. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB;MV | Approved | no | ||
Call Number | Admin @ si @ DSR2015 | Serial | 2635 | ||
Permanent link to this record | |||||
Author | Alvaro Cepero; Albert Clapes; Sergio Escalera | ||||
Title | Automatic non-verbal communication skills analysis: a quantitative evaluation | Type | Journal Article | ||
Year | 2015 | Publication | AI Communications | Abbreviated Journal | AIC |
Volume | 28 | Issue | 1 | Pages | 87-101 |
Keywords | Social signal processing; human behavior analysis; multi-modal data description; multi-modal data fusion; non-verbal communication analysis; e-Learning | ||||
Abstract | Oral communication competence ranks among the most relevant skills for one's professional and personal life. Because of the importance of communication in our activities of daily living, it is crucial to study methods to evaluate it and provide the necessary feedback so that these communication capabilities can be improved and, therefore, we can learn to express ourselves better. In this work, we propose a system capable of automatically evaluating the quality of oral presentations in a quantitative fashion. The system is based on a multi-modal description of RGB, depth, and audio data and a fusion approach, in order to recognize behavioral cues and train classifiers able to predict communication quality levels. The performance of the proposed system is tested on a novel dataset containing real Bachelor's thesis defenses, presentations from 8th-semester Bachelor courses, and Master's course presentations at Universitat de Barcelona. Using as ground truth the marks assigned by actual instructors, our system achieves high performance in categorizing and ranking presentations by their quality, and also in making real-valued mark predictions. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0921-7126 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | HUPBA;MILAB | Approved | no | ||
Call Number | Admin @ si @ CCE2015 | Serial | 2549 | ||
Permanent link to this record |