Author Alejandro Cartas; Juan Marin; Petia Radeva; Mariella Dimiccoli
Title Batch-based activity recognition from egocentric photo-streams revisited Type Journal Article
Year 2018 Publication Pattern Analysis and Applications Abbreviated Journal PAA
Volume 21 Issue 4 Pages 953–965
Keywords Egocentric vision; Lifelogging; Activity recognition; Deep learning; Recurrent neural networks
Abstract Wearable cameras can gather large amounts of image data that provide rich visual information about the daily activities of the wearer. Motivated by the large number of health applications that could be enabled by the automatic recognition of daily activities, such as lifestyle characterization for habit improvement, context-aware personal assistance and tele-rehabilitation services, we propose a system to classify 21 daily activities from photo-streams acquired by a wearable photo-camera. Our approach combines the advantages of a late fusion ensemble strategy relying on convolutional neural networks at image level with the ability of recurrent neural networks to account for the temporal evolution of high-level features in photo-streams without relying on event boundaries. The proposed batch-based approach achieved an overall accuracy of 89.85%, outperforming state-of-the-art end-to-end methodologies. These results were achieved on a dataset consisting of 44,902 egocentric pictures from three persons, captured over 26 days on average.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB; no proj Approved no
Call Number Admin @ si @ CMR2018 Serial 3186
Permanent link to this record
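The batch-based strategy this abstract describes — per-image CNN features consumed by a recurrent network over fixed-size batches, with no event boundaries — can be sketched in a few lines. This is a toy Elman recurrence with random weights; the paper's actual system uses an ensemble of CNNs with late fusion and LSTM units, so every name and dimension here is illustrative.

```python
import numpy as np

def batch_rnn_scores(features, wx, wh, wo, batch=10):
    """Run a simple Elman RNN over a photo-stream in fixed-size batches.
    The hidden state is reset at each batch boundary, so no event
    segmentation of the stream is required.
    features: (n_frames, d) per-image CNN feature vectors."""
    scores = []
    for start in range(0, len(features), batch):
        h = np.zeros(wh.shape[0])  # fresh state for each batch
        for f in features[start:start + batch]:
            h = np.tanh(wx @ f + wh @ h)   # recurrent update
            scores.append(wo @ h)          # per-frame activity scores
    return np.array(scores)
```

Resetting the hidden state at each batch boundary is what removes the need to segment the stream into events first.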
 

 
Author Marc Bolaños; Alvaro Peris; Francisco Casacuberta; Sergi Solera; Petia Radeva
Title Egocentric video description based on temporally-linked sequences Type Journal Article
Year 2018 Publication Journal of Visual Communication and Image Representation Abbreviated Journal JVCIR
Volume 50 Issue Pages 205-216
Keywords egocentric vision; video description; deep learning; multi-modal learning
Abstract Egocentric vision consists of acquiring images throughout the day from a first-person point of view using wearable cameras. The automatic analysis of this information makes it possible to discover daily patterns that can improve the quality of life of the user. A natural topic that arises in egocentric vision is storytelling, that is, how to understand and tell the story lying behind the pictures.
In this paper, we tackle storytelling as an egocentric sequence description problem. We propose a novel methodology that exploits information from temporally neighboring events, precisely matching the nature of egocentric sequences. Furthermore, we present a new method for multimodal data fusion consisting of a multi-input attention recurrent network. We also release the EDUB-SegDesc dataset, the first dataset for egocentric image sequence description, consisting of 1,339 events with 3,991 descriptions, from 55 days acquired by 11 people. Finally, we show that our proposal outperforms classical attentional encoder-decoder methods for video description.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB; no proj Approved no
Call Number Admin @ si @ BPC2018 Serial 3109
 

 
Author Marc Bolaños; R. Mestre; Estefania Talavera; Xavier Giro; Petia Radeva
Title Visual Summary of Egocentric Photostreams by Representative Keyframes Type Conference Article
Year 2015 Publication IEEE International Conference on Multimedia and Expo ICMEW2015 Abbreviated Journal
Volume Issue Pages 1-6
Keywords egocentric; lifelogging; summarization; keyframes
Abstract Building a visual summary from an egocentric photostream captured by a lifelogging wearable camera is of high interest for different applications (e.g. memory reinforcement). In this paper, we propose a new summarization method based on keyframe selection that uses visual features extracted by means of a convolutional neural network. Our method applies unsupervised clustering to divide the photostream into events, and then extracts the most relevant keyframe for each event. We assess the results by applying a blind-taste test to a group of 20 people who rated the quality of the summaries.
Address Torino; Italy; July 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-1-4799-7079-7 Medium
Area Expedition Conference ICME
Notes MILAB Approved no
Call Number Admin @ si @ BMT2015 Serial 2638
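The pipeline in this abstract — CNN features, unsupervised clustering into events, one representative keyframe per event — can be sketched with plain k-means. The toy feature vectors stand in for the CNN features, and the farthest-point initialisation is an implementation convenience, not necessarily the paper's method.

```python
import numpy as np

def select_keyframes(features, k, iters=20):
    """Cluster frame features into k events (plain k-means) and return,
    per cluster, the index of the frame nearest the centroid."""
    # farthest-point initialisation: deterministic and well spread
    centroids = [features[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(features - c, axis=1) for c in centroids],
                   axis=0)
        centroids.append(features[d.argmax()])
    centroids = np.array(centroids, dtype=float)
    for _ in range(iters):
        # assign every frame to its nearest centroid, then update centroids
        dist = np.linalg.norm(features[:, None] - centroids[None], axis=2)
        labels = dist.argmin(axis=1)
        for c in range(k):
            if (labels == c).any():
                centroids[c] = features[labels == c].mean(axis=0)
    # representative keyframe: the frame closest to its cluster centroid
    keyframes = [int(np.where(labels == c)[0][dist[labels == c, c].argmin()])
                 for c in range(k)]
    return sorted(keyframes)
```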
 

 
Author T. Mouats; N. Aouf; Angel Sappa; Cristhian A. Aguilera-Carrasco; Ricardo Toledo
Title Multi-Spectral Stereo Odometry Type Journal Article
Year 2015 Publication IEEE Transactions on Intelligent Transportation Systems Abbreviated Journal TITS
Volume 16 Issue 3 Pages 1210-1224
Keywords Egomotion estimation; feature matching; multispectral odometry (MO); optical flow; stereo odometry; thermal imagery
Abstract In this paper, we investigate the problem of visual odometry for ground vehicles based on the simultaneous use of multispectral cameras. The setup is a stereo rig composed of an optical (visible) sensor and a thermal sensor. The novelty resides in localizing the cameras as a stereo setup rather than as two monocular cameras of different spectra. To the best of our knowledge, this is the first time such a task has been attempted. Log-Gabor wavelets at different orientations and scales are used to extract interest points from both images. These are then described using a combination of frequency and spatial information within the local neighborhood. Matches between the pairs of multimodal images are computed using the cosine similarity function on the descriptors. A pyramidal Lucas–Kanade tracker is also introduced to tackle temporal feature matching within challenging sequences of the data sets. The vehicle egomotion is computed from the triangulated 3-D points corresponding to the matched features. A windowed version of bundle adjustment incorporating Gauss–Newton optimization is used for motion estimation. An outlier removal scheme is also included in the framework. Multispectral data sets, corresponding to real outdoor scenarios captured with our multimodal setup, were generated and used as a test bed. Finally, detailed results validating the proposed strategy are presented.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1524-9050 ISBN Medium
Area Expedition Conference
Notes ADAS; 600.055; 600.076 Approved no
Call Number Admin @ si @ MAS2015a Serial 2533
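The matching step in this abstract — pairing multimodal interest points by the cosine similarity of their descriptors — reduces to the sketch below. The mutual-best filtering is a common assumption added here to discard spurious pairs, not something the abstract states, and the descriptors are toy vectors rather than Log-Gabor responses.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, min_sim=0.9):
    """Match rows of desc_a to rows of desc_b by cosine similarity,
    keeping only mutual best matches above min_sim."""
    # normalise rows so the dot product equals cosine similarity
    a = desc_a / np.linalg.norm(desc_a, axis=1, keepdims=True)
    b = desc_b / np.linalg.norm(desc_b, axis=1, keepdims=True)
    sim = a @ b.T
    best_ab = sim.argmax(axis=1)   # best b for each a
    best_ba = sim.argmax(axis=0)   # best a for each b
    return [(i, int(j)) for i, j in enumerate(best_ab)
            if best_ba[j] == i and sim[i, j] >= min_sim]
```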
 

 
Author Debora Gil; Jose Maria-Carazo; Roberto Marabini
Title On the nature of 2D crystal unbending Type Journal Article
Year 2006 Publication Journal of Structural Biology Abbreviated Journal
Volume 156 Issue 3 Pages 546-555
Keywords Electron microscopy
Abstract Crystal unbending, the process that aims to recover a perfect crystal from experimental data, is one of the most important steps in electron crystallography image processing. The unbending process involves three steps: estimation of the unit cell displacements from their ideal positions, extension of the deformation field to the whole image, and transformation of the image in order to recover an ideal crystal. In this work, we present a systematic analysis of the second step, addressing two issues. First, whether the unit cells should remain undistorted, with only the distances between them changed (rigid case), or should undergo the same deformation as the whole crystal (elastic case). Second, the performance of different extension algorithms (interpolation versus approximation) is explored. Our experiments show that there is no difference between the elastic and rigid cases or among the extension algorithms, which implies that the deformation fields are constant over large areas. Furthermore, our results indicate that the main source of error is the transformation of the crystal image.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1047-8477 ISBN Medium
Area Expedition Conference
Notes IAM; Approved no
Call Number IAM @ iam @ GCM2006 Serial 1519
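The second unbending step analysed in this record — extending the measured unit-cell displacements to the whole image — contrasts interpolation (the extended field passes exactly through the measurements) with approximation (a smoothing least-squares fit). A 1-D numpy sketch of the distinction, not the paper's 2-D implementation:

```python
import numpy as np

def extend_field_interp(xs, dxs, grid):
    """Interpolation: the extended displacement field passes exactly
    through the measured unit-cell displacements (piecewise linear)."""
    return np.interp(grid, xs, dxs)

def extend_field_approx(xs, dxs, grid, degree=1):
    """Approximation: a low-order least-squares polynomial fit that
    smooths measurement noise instead of honoring every sample."""
    coeffs = np.polyfit(xs, dxs, degree)
    return np.polyval(coeffs, grid)
```

On noiseless linear data the two schemes agree, which mirrors the paper's finding that the choice makes little difference when the field is smooth over large areas.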
 

 
Author Debora Gil; Aura Hernandez-Sabate; Antoni Carol; Oriol Rodriguez; Petia Radeva
Title A Deterministic-Statistic Adventitia Detection in IVUS Images Type Conference Article
Year 2005 Publication ESC Congress Abbreviated Journal
Volume Issue Pages
Keywords Electron microscopy; Unbending; 2D crystal; Interpolation; Approximation
Abstract Plaque analysis in IVUS planes needs accurate intima and adventitia models. The large variety of adventitia descriptors hinders its detection and motivates using a classification strategy for selecting points on the structure. Whatever the set of descriptors used, the selection stage suffers from spurious responses due to noise and incomplete true curves. In order to smooth background noise while strengthening responses, we apply a restricted anisotropic filter that homogenizes grey levels along the image's significant structures. Candidate points are extracted by means of a simple semi-supervised adaptive classification of the filtered image's response to edge and calcium detectors. The final model is obtained by interpolating the former line segments with an anisotropic contour closing technique based on functional extension principles.
Address Stockholm; Sweden; September 2005
Corporate Author Thesis
Publisher Place of Publication Sweden (EU) Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ESC
Notes IAM;MILAB Approved no
Call Number IAM @ iam @ RMF2005a Serial 1523
 

 
Author Debora Gil; Aura Hernandez-Sabate; Antoni Carol; Oriol Rodriguez; Petia Radeva
Title A Deterministic-Statistic Adventitia Detection in IVUS Images Type Conference Article
Year 2005 Publication 3rd International Workshop on Functional Imaging and Modeling of the Heart Abbreviated Journal
Volume Issue Pages 65-74
Keywords Electron microscopy; Unbending; 2D crystal; Interpolation; Approximation
Abstract Plaque analysis in IVUS planes needs accurate intima and adventitia models. The large variety of adventitia descriptors hinders its detection and motivates using a classification strategy for selecting points on the structure. Whatever the set of descriptors used, the selection stage suffers from spurious responses due to noise and incomplete true curves. In order to smooth background noise while strengthening responses, we apply a restricted anisotropic filter that homogenizes grey levels along the image's significant structures. Candidate points are extracted by means of a simple semi-supervised adaptive classification of the filtered image's response to edge and calcium detectors. The final model is obtained by interpolating the former line segments with an anisotropic contour closing technique based on functional extension principles.
Address Barcelona; June 2005
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference FIMH
Notes IAM;MILAB Approved no
Call Number IAM @ iam @ RMF2005 Serial 1524
 

 
Author Laura Lopez-Fuentes; Joost Van de Weijer; Manuel Gonzalez-Hidalgo; Harald Skinnemoen; Andrew Bagdanov
Title Review on computer vision techniques in emergency situations Type Journal Article
Year 2018 Publication Multimedia Tools and Applications Abbreviated Journal MTAP
Volume 77 Issue 13 Pages 17069–17107
Keywords Emergency management; Computer vision; Decision makers; Situational awareness; Critical situation
Abstract In emergency situations, actions that save lives and limit the impact of hazards are crucial. In order to act, situational awareness is needed to decide what to do. Geolocalized photos and video of situations as they evolve can be crucial for understanding them better and making decisions faster. Cameras are almost everywhere these days, whether in smartphones, installed CCTV systems, UAVs or elsewhere. However, this poses challenges of big data and information overload. Moreover, most of the time there is no disaster at any given location, so humans tasked with detecting sudden situations may not be as alert as needed at any point in time. Consequently, computer vision tools can be an excellent decision support. The range of emergencies in which computer vision tools have been considered or used is very wide, and there is great overlap across related emergency research. Researchers tend to focus on state-of-the-art systems that cover the same emergency they are studying, overlooking important research in other fields. In order to unveil this overlap, the survey is divided along four main axes: the types of emergencies that have been studied in computer vision, the objectives that the algorithms can address, the type of hardware needed, and the algorithms used. This review thus provides a broad overview of the progress of computer vision covering all sorts of emergencies.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes LAMP; 600.068; 600.120 Approved no
Call Number Admin @ si @ LWG2018 Serial 3041
 

 
Author Rain Eric Haamer; Eka Rusadze; Iiris Lusi; Tauseef Ahmed; Sergio Escalera; Gholamreza Anbarjafari
Title Review on Emotion Recognition Databases Type Book Chapter
Year 2018 Publication Human-Robot Interaction: Theory and Application Abbreviated Journal
Volume Issue Pages
Keywords emotion; computer vision; databases
Abstract Over the past few decades, human-computer interaction has become more important in our daily lives, and research has developed in many directions: memory research, depression detection, behavioural deficiency detection, lie detection, (hidden) emotion recognition, etc. Because of this, the number of generic emotion and face databases, and of those tailored to specific needs, has grown immensely. Thus, a comprehensive yet compact guide is needed to help researchers find the most suitable database and understand what types of databases already exist. In this chapter, different elicitation methods are discussed and the databases are organized into neat, informative tables based on their format.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-1-78923-316-2 Medium
Area Expedition Conference
Notes HUPBA; 602.133 Approved no
Call Number Admin @ si @ HRL2018 Serial 3212
 

 
Author Jorge Bernal; Nima Tajkbaksh; F. Javier Sanchez; Bogdan J. Matuszewski; Hao Chen; Lequan Yu; Quentin Angermann; Olivier Romain; Bjorn Rustad; Ilangko Balasingham; Konstantin Pogorelov; Sungbin Choi; Quentin Debard; Lena Maier Hein; Stefanie Speidel; Danail Stoyanov; Patrick Brandao; Henry Cordova; Cristina Sanchez Montes; Suryakanth R. Gurudu; Gloria Fernandez Esparrach; Xavier Dray; Jianming Liang; Aymeric Histace
Title Comparative Validation of Polyp Detection Methods in Video Colonoscopy: Results from the MICCAI 2015 Endoscopic Vision Challenge Type Journal Article
Year 2017 Publication IEEE Transactions on Medical Imaging Abbreviated Journal TMI
Volume 36 Issue 6 Pages 1231 - 1249
Keywords Endoscopic vision; Polyp Detection; Handcrafted features; Machine Learning; Validation Framework
Abstract Colonoscopy is the gold standard for colon cancer screening, though some polyps are still missed, preventing early disease detection and treatment. Several computational systems have been proposed to assist polyp detection during colonoscopy, but so far without consistent evaluation. The lack of publicly available annotated databases has made it difficult to compare methods and to assess whether they achieve performance levels acceptable for clinical use. The Automatic Polyp Detection subchallenge, conducted as part of the Endoscopic Vision Challenge (http://endovis.grand-challenge.org) at the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2015, was an effort to address this need. In this paper, we report the results of this comparative evaluation of polyp detection methods and describe additional experiments to further explore differences between methods. We define performance metrics and provide evaluation databases that allow comparison of multiple methodologies. Results show that convolutional neural networks (CNNs) are the state of the art. Nevertheless, it is also demonstrated that combining different methodologies can lead to improved overall performance.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MV; 600.096; 600.075 Approved no
Call Number Admin @ si @ BTS2017 Serial 2949
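Comparative evaluations like this one score detectors from counts of correct detections, false alarms and missed polyps. A minimal sketch of the usual derived metrics — precision, recall and F1; the exact metric set used in the challenge is defined in the paper itself:

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall and F1 from detection counts:
    tp = true detections, fp = false alarms, fn = missed targets."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```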
 

 
Author Farhan Riaz; Fernando Vilariño; Mario Dinis-Ribeiro; Miguel Coimbra
Title Identifying Potentially Cancerous Tissues in Chromoendoscopy Images Type Conference Article
Year 2011 Publication 5th Iberian Conference on Pattern Recognition and Image Analysis Abbreviated Journal
Volume 6669 Issue Pages 709-716
Keywords Endoscopy; Computer Assisted Diagnosis; Gradient
Abstract The dynamics of image acquisition conditions in gastroenterology imaging scenarios pose novel challenges for automatic computer-assisted decision systems. Such systems should have the ability to mimic the tissue characterization performed by physicians. In this paper, our objective is to compare several feature extraction methods for classifying a chromoendoscopy image into two classes: normal and potentially cancerous. Results show that LoG filters generally give the best classification accuracy among the feature extraction methods considered.
Address Las Palmas de Gran Canaria; Spain
Corporate Author Thesis
Publisher Springer Place of Publication Berlin Editor J. Vitria, J.M. Sanches, and M. Hernandez
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN 978-3-642-21256-7 Medium
Area 800 Expedition Conference IbPRIA
Notes MV;SIAI Approved no
Call Number Admin @ si @ RVD2011; IAM @ iam @ RVD2011 Serial 1726
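The LoG filters reported above as the best-performing features can be sketched at a single scale as follows: a hand-rolled zero-mean Laplacian-of-Gaussian kernel and a direct "valid" convolution. The paper's filter bank, scales and classifier are not reproduced here.

```python
import numpy as np

def log_kernel(sigma, size=None):
    """Discrete Laplacian-of-Gaussian kernel, forced to zero mean so
    flat image regions produce a zero response."""
    if size is None:
        size = int(6 * sigma) | 1       # odd size covering ~3 sigma
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    s2 = sigma ** 2
    g = np.exp(-(x**2 + y**2) / (2 * s2))
    k = (x**2 + y**2 - 2 * s2) / (s2**2) * g
    return k - k.mean()

def log_response(image, sigma=1.0):
    """'Valid' 2-D correlation of the image with a LoG kernel (the
    kernel is symmetric, so correlation equals convolution)."""
    k = log_kernel(sigma)
    kh, kw = k.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * k)
    return out
```

A classifier would then be trained on statistics of these responses per image patch.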
 

 
Author Miguel Angel Bautista; Xavier Baro; Oriol Pujol; Petia Radeva; Jordi Vitria; Sergio Escalera
Title Compact Evolutive Design of Error-Correcting Output Codes Type Conference Article
Year 2010 Publication Supervised and Unsupervised Ensemble Methods and their Applications in the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases Abbreviated Journal
Volume Issue Pages 119-128
Keywords Ensemble of Dichotomizers; Error-Correcting Output Codes; Evolutionary optimization
Abstract The classification of a large number of object categories is a challenging trend in the Machine Learning field. In the literature, this is often addressed using an ensemble of classifiers. In this scope, the Error-Correcting Output Codes framework has demonstrated to be a powerful tool for combining classifiers. However, most state-of-the-art ECOC approaches use a linear or exponential number of classifiers, making the discrimination of a large number of classes unfeasible. In this paper, we explore and propose a minimal design of ECOC in terms of the number of classifiers. Evolutionary computation is used for tuning the parameters of the classifiers and searching for the best minimal ECOC code configuration. The results over several public UCI data sets and a challenging multi-class Computer Vision problem show that the proposed methodology obtains comparable and even better results than state-of-the-art ECOC methodologies with far fewer dichotomizers.
Address Barcelona (Spain)
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference SUEMA
Notes OR;MILAB;HUPBA;MV Approved no
Call Number BCNPCL @ bcnpcl @ BBP2010 Serial 1363
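The minimal-ECOC idea — discriminating N classes with roughly log2(N) dichotomizers instead of a linear or exponential number — rests on nearest-codeword decoding, sketched below. The 4-class, 2-dichotomizer code matrix in the test is illustrative; the paper's evolutionary search for the code configuration and classifier parameters is not shown.

```python
import numpy as np

def ecoc_decode(code_matrix, outputs):
    """Decode ECOC: each sample gets the class whose codeword is nearest
    (Hamming distance) to its vector of dichotomizer outputs.
    code_matrix: (n_classes, n_dichotomizers), entries in {-1, +1}
    outputs:     (n_samples, n_dichotomizers), entries in {-1, +1}"""
    # broadcast to (n_samples, n_classes, n_dichotomizers), count mismatches
    dist = (code_matrix[None, :, :] != outputs[:, None, :]).sum(axis=2)
    return dist.argmin(axis=1)
```

With 2 dichotomizers the 4 codewords use every bit pattern, which is exactly the minimal design: compact, but with no redundancy left for error correction.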
 

 
Author Jose Elias Yauri; M. Lagos; H. Vega-Huerta; P. de-la-Cruz; G.L.E Maquen-Niño; E. Condor-Tinoco
Title Detection of Epileptic Seizures Based on Channel Fusion and Transformer Network in EEG Recordings Type Journal Article
Year 2023 Publication International Journal of Advanced Computer Science and Applications Abbreviated Journal IJACSA
Volume 14 Issue 5 Pages 1067-1074
Keywords Epilepsy; epilepsy detection; EEG; EEG channel fusion; convolutional neural network; self-attention
Abstract According to the World Health Organization, epilepsy affects more than 50 million people in the world, and 80% of them live in developing countries. Epilepsy has therefore become a major public health issue for many governments and deserves attention. Epilepsy is characterized by uncontrollable seizures due to sudden abnormal functionality of the brain. Recurrent epileptic attacks change people's lives and interfere with their daily activities. Although epilepsy has no cure, it can be mitigated with an appropriate diagnosis and medication. Usually, epilepsy diagnosis is based on the analysis of an electroencephalogram (EEG) of the patient. However, searching for seizure patterns in a multichannel EEG recording is a visually demanding and time-consuming task, even for experienced neurologists. Despite recent progress in the automatic recognition of epilepsy, the multichannel nature of EEG recordings still challenges current methods. In this work, a new method to detect epilepsy in multichannel EEG recordings is proposed. First, the method uses convolutions to perform channel fusion; next, a self-attention network extracts temporal features to classify between interictal and ictal epilepsy states. The method was validated on the public CHB-MIT dataset using k-fold cross-validation and achieved 99.74% specificity and 99.15% sensitivity, surpassing current approaches.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes IAM Approved no
Call Number Admin @ si @ Serial 3856
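The two stages named in this abstract — convolutional channel fusion followed by self-attention over time — can be shape-sketched in numpy. The 23-channel input matches the montage commonly used with CHB-MIT, but all weights and dimensions here are illustrative, and the paper's full transformer classifier is not reproduced.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_channels(eeg, w):
    """Channel fusion as a 1x1 convolution: linearly mixes the EEG
    channels at every time step.
    eeg: (n_channels, n_samples), w: (d_model, n_channels)."""
    return w @ eeg

def self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product self-attention over time steps.
    x: (n_steps, d_model); returns (n_steps, d_v)."""
    q, k, v = x @ wq, x @ wk, x @ wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[1]), axis=-1)
    return attn @ v
```

In the real model, the attended features would feed a small head classifying interictal vs ictal windows.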
 

 
Author Mohammad Ali Bagheri; Qigang Gao; Sergio Escalera
Title A Genetic-based Subspace Analysis Method for Improving Error-Correcting Output Coding Type Journal Article
Year 2013 Publication Pattern Recognition Abbreviated Journal PR
Volume 46 Issue 10 Pages 2830-2839
Keywords Error Correcting Output Codes; Evolutionary computation; Multiclass classification; Feature subspace; Ensemble classification
Abstract Two key factors affecting the performance of Error Correcting Output Codes (ECOC) in multiclass classification problems are the independence of binary classifiers and the problem-dependent coding design. In this paper, we propose an evolutionary algorithm-based approach to the design of an application-dependent codematrix in the ECOC framework. The central idea of this work is to design a three-dimensional codematrix, where the third dimension is the feature space of the problem domain. In order to do that, we consider the feature space in the design process of the codematrix with the aim of improving the independence and accuracy of binary classifiers. The proposed method takes advantage of some basic concepts of ensemble classification, such as diversity of classifiers, and also benefits from the evolutionary approach for optimizing the three-dimensional codematrix, taking into account the problem domain. We provide a set of experimental results using a set of benchmark datasets from the UCI Machine Learning Repository, as well as two real multiclass Computer Vision problems. Both sets of experiments are conducted using two different base learners: Neural Networks and Decision Trees. The results show that the proposed method increases the classification accuracy in comparison with the state-of-the-art ECOC coding techniques.
Address
Corporate Author Thesis
Publisher Elsevier Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0031-3203 ISBN Medium
Area Expedition Conference
Notes HuPBA;MILAB Approved no
Call Number Admin @ si @ BGE2013a Serial 2247
 

 
Author Pau Rodriguez; Miguel Angel Bautista; Sergio Escalera; Jordi Gonzalez
Title Beyond One-hot Encoding: lower dimensional target embedding Type Journal Article
Year 2018 Publication Image and Vision Computing Abbreviated Journal IMAVIS
Volume 75 Issue Pages 21-31
Keywords Error correcting output codes; Output embeddings; Deep learning; Computer vision
Abstract Target encoding plays a central role when learning Convolutional Neural Networks. In this realm, one-hot encoding is the most prevalent strategy due to its simplicity. However, this widespread encoding scheme assumes a flat label space, thus ignoring rich relationships among labels that can be exploited during training. In large-scale datasets, data does not span the full label space but instead lies on a low-dimensional output manifold. Following this observation, we embed the targets into a low-dimensional space, drastically improving convergence speed while preserving accuracy. Our contribution is twofold: (i) we show that random projections of the label space are a valid tool for finding such lower-dimensional embeddings, dramatically boosting convergence rates at zero computational cost; and (ii) we propose a normalized eigenrepresentation of the class manifold that encodes the targets with minimal information loss, improving the accuracy of random projection encoding while enjoying the same convergence rates. Experiments on CIFAR-100, CUB200-2011, ImageNet, and MIT Places demonstrate that the proposed approach drastically improves convergence speed while reaching very competitive accuracy rates.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ISE; HuPBA; 600.098; 602.133; 602.121; 600.119 Approved no
Call Number Admin @ si @ RBE2018 Serial 3120
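Contribution (i) of this abstract — random projections of the label space as lower-dimensional training targets — can be sketched as follows. Dimensions are illustrative, and the normalized eigenrepresentation of contribution (ii) is not shown.

```python
import numpy as np

def embed_targets(n_classes, dim, seed=0):
    """Embed the one-hot label space into `dim` dimensions with a random
    Gaussian projection; row c is the training target for class c."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal((n_classes, dim)) / np.sqrt(dim)

def decode_prediction(y_pred, embedding):
    """Map a network output back to a class: nearest embedded target."""
    return int(np.linalg.norm(embedding - y_pred, axis=1).argmin())
```

Training then regresses the network output onto the embedded target instead of a one-hot vector, which is where the reported convergence speed-up comes from.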