Author Jorge Bernal; F. Javier Sanchez; Fernando Vilariño
  Title Depth of Valleys Accumulation Algorithm for Object Detection Type Conference Article
  Year 2011 Publication 14th Catalan Conference on Artificial Intelligence Abbreviated Journal  
  Volume 1 Issue 1 Pages 71-80  
  Keywords Object Recognition, Object Region Identification, Image Analysis, Image Processing  
  Abstract This work aims at detecting the regions of an image in which objects lie, using information about the intensity of valleys, which appear to surround objects in images where the light source points in the same direction as the camera. We present our depth of valleys accumulation method, which consists of two stages: first, the definition of the depth of valleys image, which combines the output of a ridge and valley detector with the morphological gradient to measure how deep a point lies inside a valley; and second, an algorithm that marks as interior to objects those points which lie inside complete or incomplete boundaries in the depth of valleys image. To evaluate the performance of our method we have tested it on several application domains. Our results on object region identification are promising, especially in the field of polyp detection in colonoscopy videos, and we also show its applicability in different areas.  
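The two-stage construction described in this abstract can be sketched in a few lines. This is an illustrative reading, not the authors' implementation; in particular, the pixel-wise product used to combine the valley response with the morphological gradient is an assumption (the abstract only says the two are combined).

```python
def morphological_gradient(img):
    """Dilation minus erosion over a 3x3 neighbourhood; border pixels
    use only the neighbours that exist. `img` is a list of rows."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            neigh = [img[ny][nx]
                     for ny in range(max(0, y - 1), min(h, y + 2))
                     for nx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = max(neigh) - min(neigh)
    return out

def depth_of_valleys(valley_response, img):
    """Combine a valley-detector response (values in [0, 1]) with the
    morphological gradient -- here as a pixel-wise product, one plausible
    reading of the 'combination' the abstract mentions."""
    grad = morphological_gradient(img)
    return [[v * g for v, g in zip(vr, gr)]
            for vr, gr in zip(valley_response, grad)]
```

A deep, narrow valley (low intensity flanked by high intensity) then yields a large depth-of-valleys value, which the second stage would accumulate inside candidate object boundaries.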
  Address Lleida  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-1-60750-841-0 Medium  
  Area 800 Expedition Conference CCIA  
  Notes MV;SIAI Approved no  
  Call Number IAM @ iam @ BSV2011b Serial 1699  
 

 
Author Xavier Perez Sala; Cecilio Angulo; Sergio Escalera
  Title Biologically Inspired Turn Control in Robot Navigation Type Conference Article
  Year 2011 Publication 14th Catalan Conference on Artificial Intelligence Abbreviated Journal  
  Volume Issue Pages 187-196  
  Keywords  
  Abstract An exportable and robust system for turn control using only camera images is proposed for path execution in robot navigation. Robot motion information is extracted in the form of optical flow from SURF robust descriptors of consecutive frames in the image sequence. This information is used to compute the instantaneous rotation angle. Finally, the control loop is closed by correcting robot displacements whenever a turn command is requested. The proposed system has been successfully tested on the four-legged Sony Aibo robot.  
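The rotation-angle step in the abstract above can be illustrated with a minimal pinhole-camera sketch. The median-of-horizontal-displacements estimator and the `focal_px` parameter are assumptions for illustration, not the paper's exact formulation.

```python
import math
from statistics import median

def rotation_angle(matches, focal_px):
    """Estimate the camera's instantaneous yaw from keypoints matched
    across two consecutive frames, assuming a pure rotation about the
    vertical axis: each feature shifts horizontally by about
    focal_px * tan(theta). `matches` is a list of ((x0, y0), (x1, y1))
    pixel-coordinate pairs; the median makes the estimate robust to
    outlier matches."""
    dx = median(x1 - x0 for (x0, _y0), (x1, _y1) in matches)
    return math.atan2(dx, focal_px)
```

In a closed control loop, the sign and magnitude of the returned angle would drive the correction applied while the robot executes a turn command.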
  Address Lleida  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-1-60750-841-0 Medium  
  Area Expedition Conference CCIA  
  Notes HuPBA;MILAB Approved no  
  Call Number Admin @ si @ PAE2011a Serial 1753  
 

 
Author Alvaro Cepero; Albert Clapes; Sergio Escalera
  Title Quantitative analysis of non-verbal communication for competence analysis Type Conference Article
  Year 2013 Publication 16th Catalan Conference on Artificial Intelligence Abbreviated Journal  
  Volume 256 Issue Pages 105-114  
  Keywords  
  Abstract  
  Address Vic; October 2013  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CCIA  
  Notes HUPBA;MILAB Approved no  
  Call Number Admin @ si @ CCE2013 Serial 2324  
 

 
Author Vitaliy Konovalov; Albert Clapes; Sergio Escalera
  Title Automatic Hand Detection in RGB-Depth Data Sequences Type Conference Article
  Year 2013 Publication 16th Catalan Conference on Artificial Intelligence Abbreviated Journal  
  Volume Issue Pages 91-100  
  Keywords  
  Abstract Detecting hands in multi-modal RGB-Depth visual data has become a challenging Computer Vision problem with several applications of interest. This task involves dealing with changes in illumination, viewpoint variations, the articulated nature of the human body, the high flexibility of the wrist articulation, and the deformability of the hand itself. In this work, we propose an accurate and efficient automatic hand detection scheme to be applied in Human-Computer Interaction (HCI) applications in which the user is seated at a desk and, thus, only the upper body is visible. Our main hypothesis is that hand landmarks remain at a nearly constant geodesic distance from an automatically located anatomical reference point. In a given frame, the human body is first segmented in the depth image. Then, a graph representation of the body is built in which geodesic paths are computed from the reference point. The dense optical flow vectors on the corresponding RGB image are used to reduce ambiguities in the connectivity of the geodesic paths, making it possible to eliminate false edges interconnecting different body parts. Finally, we are able to detect the position of both hands based on invariant geodesic distances and optical flow within the body region, without involving costly learning procedures.
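The geodesic-distance machinery this abstract relies on can be sketched with a breadth-first search over the segmented body mask. The 4-connectivity, unit edge weights, and the distance-band hand selection below are simplifying assumptions, not the paper's exact graph construction.

```python
from collections import deque

def geodesic_distances(mask, seed):
    """Breadth-first geodesic distance (4-connectivity, unit steps) from
    `seed` over the True pixels of a binary body mask; background or
    unreachable pixels get -1."""
    h, w = len(mask), len(mask[0])
    dist = [[-1] * w for _ in range(h)]
    sy, sx = seed
    dist[sy][sx] = 0
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and dist[ny][nx] < 0:
                dist[ny][nx] = dist[y][x] + 1
                q.append((ny, nx))
    return dist

def candidate_hand_pixels(dist, d_hand, tol=1):
    """Pixels whose geodesic distance from the reference point lies
    within `tol` of the expected hand distance `d_hand` -- the paper's
    near-constant-distance hypothesis turned into a selection band."""
    return [(y, x) for y, row in enumerate(dist)
            for x, d in enumerate(row) if d >= 0 and abs(d - d_hand) <= tol]
```

In the paper's pipeline, optical flow would then prune the graph edges before these distances are computed; that step is omitted here.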
 
  Address Vic; October 2013  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CCIA  
  Notes HuPBA;MILAB Approved no  
  Call Number Admin @ si @ KCE2013 Serial 2323  
 

 
Author Maedeh Aghaei; Petia Radeva
  Title Bag-of-Tracklets for Person Tracking in Life-Logging Data Type Conference Article
  Year 2014 Publication 17th International Conference of the Catalan Association for Artificial Intelligence Abbreviated Journal  
  Volume 269 Issue Pages 35-44  
  Keywords  
  Abstract With the increasing popularity of wearable cameras, life-logging data analysis is becoming more and more important and useful for deriving significant events out of this substantial collection of images. In this study, we introduce a new tracking method applied to visual life-logging, called bag-of-tracklets, which is based on detecting, localizing and tracking people. Given the low spatial and temporal resolution of the image data, our model generates and groups tracklets in an unsupervised framework and extracts image sequences of person appearance according to a similarity score of the bag-of-tracklets. The model output is a meaningful sequence of events expressing human appearance and tracking in life-logging data. The achieved results prove the robustness of our model in terms of efficiency and accuracy despite the low spatial and temporal resolution of the data.  
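The unsupervised grouping step can be sketched as a greedy assignment of tracklets to bags. The abstract does not specify the similarity score, so the pairwise `similarity` callable here is a stand-in for whatever appearance/overlap measure the model actually uses.

```python
def group_tracklets(tracklets, similarity, threshold=0.5):
    """Greedy bag-of-tracklets grouping: each tracklet joins the first
    existing bag whose prototype (its first member) it resembles above
    `threshold`; otherwise it starts a new bag. Any pairwise similarity
    in [0, 1] can be plugged in."""
    bags = []
    for t in tracklets:
        for bag in bags:
            if similarity(bag[0], t) >= threshold:
                bag.append(t)
                break
        else:
            bags.append([t])
    return bags
```

Each resulting bag would correspond to one sequence of appearances of the same person; bags with few members could be discarded as noise.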
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-1-61499-451-0 Medium  
  Area Expedition Conference CCIA  
  Notes MILAB Approved no  
  Call Number Admin @ si @ AgR2015 Serial 2607  
 

 
Author Agata Lapedriza; David Masip; David Sanchez
  Title Emotions Classification using Facial Action Units Recognition Type Conference Article
  Year 2014 Publication 17th International Conference of the Catalan Association for Artificial Intelligence Abbreviated Journal  
  Volume 269 Issue Pages 55-64  
  Keywords  
  Abstract In this work we build a system for automatic emotion classification from image sequences. We analyze subtle changes in facial expressions by detecting a subset of 12 representative facial action units (AUs). Then, we classify emotions based on the output of these AU classifiers, i.e. the presence or absence of AUs. We base the AU classification upon a set of spatio-temporal geometric and appearance features for facial representation, fusing them within the emotion classifier. A decision tree is trained for emotion classification, making the resulting model easy to interpret by capturing the combination of AU activations that leads to a particular emotion. For the Cohn-Kanade database, the proposed system classifies 7 emotions with a mean accuracy of nearly 90%, attaining a recognition accuracy similar to that of non-interpretable models that are not based on AU detection.  
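The interpretability claim — emotions as combinations of active AUs — can be illustrated with a tiny rule table. These EMFACS-style prototype sets are a hand-coded stand-in for the decision tree the paper actually trains, and the three rules shown are illustrative only.

```python
# Illustrative AU -> emotion prototypes (EMFACS-style); the paper learns
# such combinations with a decision tree rather than hand-coding them.
RULES = {
    "happiness": {6, 12},      # cheek raiser + lip corner puller
    "surprise": {1, 2, 5, 26}, # brow raisers + upper lid raiser + jaw drop
    "sadness": {1, 4, 15},     # inner brow raiser + brow lowerer + lip depressor
}

def classify_emotion(active_aus):
    """Return the emotion whose prototype AUs are all active, preferring
    the most specific (largest) matching prototype; 'neutral' if no
    prototype matches."""
    matches = [(len(aus), emo) for emo, aus in RULES.items()
               if aus <= set(active_aus)]
    return max(matches)[1] if matches else "neutral"
```

A trained tree would replace the fixed prototypes with learned splits on AU presence, but the readable if-these-AUs-then-this-emotion structure is the same.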
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-1-61499-451-0 Medium  
  Area Expedition Conference CCIA  
  Notes OR;MV Approved no  
  Call Number Admin @ si @ LMS2014 Serial 2622  
 

 
Author G. de Oliveira; Mariella Dimiccoli; Petia Radeva
  Title Egocentric Image Retrieval With Deep Convolutional Neural Networks Type Conference Article
  Year 2016 Publication 19th International Conference of the Catalan Association for Artificial Intelligence Abbreviated Journal  
  Volume Issue Pages 71-76  
  Keywords  
  Abstract  
  Address Barcelona; Spain; October 2016  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CCIA  
  Notes MILAB Approved no  
  Call Number Admin @ si @ ODR2016 Serial 2790  
 

 
Author Jose A. Garcia; David Masip; Valerio Sbragaglia; Jacopo Aguzzi
  Title Automated Identification and Tracking of Nephrops norvegicus (L.) Using Infrared and Monochromatic Blue Light Type Conference Article
  Year 2016 Publication 19th International Conference of the Catalan Association for Artificial Intelligence Abbreviated Journal  
  Volume Issue Pages  
  Keywords computer vision; video analysis; object recognition; tracking; behaviour; social; decapod; Nephrops norvegicus  
  Abstract Automated video and image analysis can be a very efficient tool for analyzing animal behavior based on sociality, especially in environments that are hard for researchers to access. Understanding this social behavior can play a key role in the sustainable design of capture policies for many species. This paper proposes the use of computer vision algorithms to identify and track a specific species, the Norway lobster, Nephrops norvegicus, a burrowing decapod with relevant commercial value which is captured by trawling. These animals can only be captured when they are engaged in seabed excursions, which are strongly related to their social behavior. This emergent behavior is modulated by the day-night cycle, but their social interactions remain unknown to the scientific community. The paper introduces an identification scheme made of four distinguishable black and white tags (geometric shapes). The project has recorded 15-day experiments in laboratory pools, under monochromatic blue light (472 nm) and darkness conditions (recorded using infrared light). Using this massive image set, we propose a comparison of state-of-the-art computer vision algorithms to distinguish and track the different animals' movements. We evaluate the robustness to the high noise present in the infrared video signals and to free out-of-plane rotations due to animal movement. The experiments show promising accuracies under a cross-validation protocol and are adaptable to the automation and analysis of large-scale data. In a second contribution, we created an extensive dataset of shapes (46027 different shapes) from four daily experimental video recordings, which will be made available to the community.
 
  Address Barcelona; Spain; October 2016  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CCIA  
  Notes OR;MV; Approved no  
  Call Number Admin @ si @ GMS2016 Serial 2816  
 

 
Author Petia Radeva
  Title Can Deep Learning and Egocentric Vision for Visual Lifelogging Help Us Eat Better? Type Conference Article
  Year 2016 Publication 19th International Conference of the Catalan Association for Artificial Intelligence Abbreviated Journal  
  Volume 4 Issue Pages  
  Keywords  
  Abstract  
  Address Barcelona; October 2016  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CCIA  
  Notes MILAB Approved no  
  Call Number Admin @ si @ Rad2016 Serial 2832  
 

 
Author Md. Mostafa Kamal Sarker; Mohammed Jabreel; Hatem A. Rashwan; Syeda Furruka Banu; Petia Radeva; Domenec Puig
  Title CuisineNet: Food Attributes Classification using Multi-scale Convolution Network Type Conference Article
  Year 2018 Publication 21st International Conference of the Catalan Association for Artificial Intelligence Abbreviated Journal  
  Volume Issue Pages 365-372  
  Keywords  
  Abstract Diversity of food and its attributes represents the culinary habits of peoples from different countries. Thus, this paper addresses the problem of identifying the food culture of people around the world and its flavor by classifying two main food attributes, cuisine and flavor. A deep learning model based on multi-scale convolutional networks is proposed for extracting more accurate features from input images. The aggregation of multi-scale convolution layers with different kernel sizes is also used for weighting the feature results from different scales. In addition, a joint loss function based on Negative Log Likelihood (NLL) is used to fit the model probability to multi-labeled classes for the multi-modal classification task. Furthermore, this work provides a new dataset for food attributes, called Yummly48K, extracted from the popular food website Yummly. Our model is assessed on the constructed Yummly48K dataset. The experimental results show that our proposed method yields 65% and 62% average F1 scores on the validation and test sets, outperforming the state-of-the-art models.  
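The joint NLL objective mentioned in this abstract can be sketched as follows. Summing the two per-head NLL terms with equal weight is an assumption; the paper's exact weighting may differ.

```python
import math

def log_softmax(scores):
    """Numerically stable log-softmax over a list of raw scores."""
    m = max(scores)
    lse = m + math.log(sum(math.exp(s - m) for s in scores))
    return [s - lse for s in scores]

def joint_nll(log_probs_cuisine, log_probs_flavor, y_cuisine, y_flavor):
    """Joint negative log-likelihood over the two attribute heads: the
    per-head NLL terms for the true cuisine and flavor labels are summed
    (equal weighting assumed for illustration)."""
    return -(log_probs_cuisine[y_cuisine] + log_probs_flavor[y_flavor])
```

During training, each head's scores would come from its own classifier layer on top of the shared multi-scale features, and this scalar loss would be minimized jointly.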
  Address Roses; Catalonia; October 2018  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CCIA  
  Notes MILAB; not mentioned Approved no  
  Call Number Admin @ si @ SJR2018 Serial 3113  
 

 
Author Md. Mostafa Kamal Sarker; Syeda Furruka Banu; Hatem A. Rashwan; Mohamed Abdel-Nasser; Vivek Kumar Singh; Sylvie Chambon; Petia Radeva; Domenec Puig
  Title Food Places Classification in Egocentric Images Using Siamese Neural Networks Type Conference Article
  Year 2019 Publication 22nd International Conference of the Catalan Association of Artificial Intelligence Abbreviated Journal  
  Volume Issue Pages 145-151  
  Keywords  
  Abstract Wearable cameras have become more popular in recent years for capturing the unscripted moments of the first person, which help to analyze the user's lifestyle. In this work, we aim to recognize the places related to food in egocentric images during a day in order to identify the daily food patterns of the first person. Such a system can thus assist in improving eating behavior to protect users against food-related diseases. In this paper, we use Siamese Neural Networks to learn the similarity between images from corresponding inputs for one-shot food-place classification. We tested our proposed method on “MiniEgoFoodPlaces” with 15 food-related places. The proposed Siamese Neural Network model with MobileNet achieved an overall classification accuracy of 76.74% and 77.53% on the validation and test sets of the “MiniEgoFoodPlaces” dataset, respectively, outperforming base models such as ResNet50, InceptionV3, and InceptionResNetV2.  
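At inference time, one-shot classification with a learned similarity reduces to nearest-neighbour matching in embedding space. This sketch assumes one support exemplar per class and plain Euclidean distance; in the paper the embeddings come from a shared MobileNet branch and the similarity is learned, so both choices here are stand-ins.

```python
import math

def one_shot_classify(query_emb, support):
    """One-shot classification in a Siamese setting: the query embedding
    is assigned the label of the closest support embedding. `support` is
    a list of (label, embedding) pairs, one exemplar per food-place
    class; embeddings are plain lists of floats."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(support, key=lambda item: dist(query_emb, item[1]))[0]
```

With 15 food-related places, `support` would hold 15 pairs and every incoming egocentric image would be labeled by its nearest exemplar.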
  Address Illes Balears; October 2019  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CCIA  
  Notes MILAB; no proj Approved no  
  Call Number Admin @ si @ SBR2019 Serial 3368  
 

 
Author Jose Elias Yauri; Aura Hernandez-Sabate; Pau Folch; Debora Gil
  Title Mental Workload Detection Based on EEG Analysis Type Conference Article
  Year 2021 Publication Artificial Intelligence Research and Development. Proceedings of the 23rd International Conference of the Catalan Association for Artificial Intelligence Abbreviated Journal  
  Volume 339 Issue Pages 268-277  
  Keywords Cognitive states; Mental workload; EEG analysis; Neural Networks.  
  Abstract The study of mental workload is essential for human work efficiency and health, and for avoiding accidents, since workload compromises both performance and awareness. Although workload has been widely studied using several physiological measures, minimising the sensor network as much as possible remains both a challenge and a requirement. Electroencephalogram (EEG) signals have shown a high correlation to specific cognitive and mental states such as workload. However, there is not enough evidence in the literature to validate how well models generalize to new subjects performing tasks with a workload similar to the ones included during the model's training. In this paper we propose a binary neural network to classify EEG features across different mental workloads. Two workloads, low and medium, are induced using two variants of the N-Back Test. The proposed model was validated on a dataset collected from 16 subjects and showed a high level of generalization capability: the model reported an average recall of 81.81% in a leave-one-subject-out evaluation.
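The leave-one-subject-out protocol behind the 81.81% recall figure can be sketched generically. The `train_and_eval` callable is a stand-in for training and scoring the paper's binary EEG network; only the fold structure is shown.

```python
from statistics import mean

def leave_one_subject_out_recall(data, train_and_eval):
    """Average recall over leave-one-subject-out folds. `data` maps
    subject id -> that subject's samples; in each fold one subject's
    samples are held out for testing and all remaining subjects' samples
    form the training set. `train_and_eval(train, test)` must return the
    recall of a model trained on `train` and evaluated on `test`."""
    recalls = []
    for held_out, test in data.items():
        train = [s for subj, samples in data.items()
                 if subj != held_out for s in samples]
        recalls.append(train_and_eval(train, test))
    return mean(recalls)
```

With 16 subjects this yields 16 folds, and the reported figure is the mean recall across them.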
 
  Address Virtual; October 20-22 2021  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CCIA  
  Notes IAM; 600.139; 600.118; 600.145 Approved no  
  Call Number Admin @ si @ Serial 3723  
 

 
Author David Rotger; Petia Radeva; E Fernandez-Nofrerias; J. Mauri
  Title Blood Detection In IVUS Longitudinal Cuts Using AdaBoost With a Novel Feature Stability Criterion Type Conference Article
  Year 2007 Publication Artificial Intelligence Research and Development. Proceedings of the 10th International Conference of the ACIA Abbreviated Journal  
  Volume 163 Issue Pages 197–204  
  Keywords  
  Abstract  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-1-58603-798-7 Medium  
  Area Expedition Conference CCIA’07  
  Notes MILAB Approved no  
  Call Number BCNPCL @ bcnpcl @ RRF2007a Serial 831  
 

 
Author Alex Goldhoorn; Arnau Ramisa; Ramon Lopez de Mantaras; Ricardo Toledo
  Title Using the Average Landmark Vector Method for Robot Homing Type Conference Article
  Year 2007 Publication Artificial Intelligence Research and Development, Proceedings of the 10th International Conference of the ACIA Abbreviated Journal  
  Volume 163 Issue Pages 331–338  
  Keywords  
  Abstract  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-1-58603-798-7 Medium  
  Area Expedition Conference CCIA’07  
  Notes RV;ADAS Approved no  
  Call Number Admin @ si @ GRL2007 Serial 899  
 

 
Author Maria Vanrell; Naila Murray; Robert Benavente; C. Alejandro Parraga; Xavier Otazu; Ramon Baldrich
  Title Perception Based Representations for Computational Colour Type Conference Article
  Year 2011 Publication 3rd International Workshop on Computational Color Imaging Abbreviated Journal  
  Volume 6626 Issue Pages 16-30  
  Keywords colour perception, induction, naming, psychophysical data, saliency, segmentation  
  Abstract The perceived colour of a stimulus depends on multiple factors stemming either from the context of the stimulus or from idiosyncrasies of the observer. The complexity involved in combining these multiple effects is the main reason for the gap between the classical calibrated colour spaces of colour science and the colour representations used in computer vision, where colour is just one more visual cue immersed in a digital image in which surfaces, shadows and illuminants interact seemingly out of control. With the aim of advancing a few steps towards bridging this gap, we present some results on computational representations of colour for computer vision. They have been developed by introducing perceptual considerations derived from the interaction of the colour of a point with its context. We show some techniques to represent the colour of a point influenced by assimilation and contrast effects due to the image surround, and we show some results on how colour saliency can be derived in real images. We outline a model for automatic assignment of colour names to image points trained directly on psychophysical data. We show how colour segments can be perceptually grouped in the image by imposing shading coherence in the colour space.  
  Address Milan, Italy  
  Corporate Author Thesis  
  Publisher Springer-Verlag Place of Publication Editor Raimondo Schettini, Shoji Tominaga, Alain Trémeau  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-3-642-20403-6 Medium  
  Area Expedition Conference CCIW  
  Notes CIC Approved no  
  Call Number Admin @ si @ VMB2011 Serial 1733  