Records
Author | Miquel Ferrer; Ernest Valveny; F. Serratosa | ||||
Title | Spectral Median Graphs Applied to Graphical Symbol Recognition | Type | Book Chapter | ||
Year | 2006 | Publication | 11th Iberoamerican Congress on Pattern Recognition (CIARP'06), J.F. Martínez-Trinidad et al. (Eds.), LNCS 4225: 774–783 | Abbreviated Journal |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | |||||
Address | Cancun (Mexico) | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | DAG | Approved | no | ||
Call Number | DAG @ dag @ FVS2006b | Serial | 698 | ||
Permanent link to this record | |||||
Author | Karla Lizbeth Caballero; Joel Barajas; Oriol Pujol; Neus Salvatella; Petia Radeva | ||||
Title | In-Vivo IVUS Tissue Classification: A Comparison Between RF Signal Analysis and Reconstructed Images | Type | Book Chapter | ||
Year | 2006 | Publication | 11th Iberoamerican Congress on Pattern Recognition (CIARP'06), LNCS 4225: 137–146 | Abbreviated Journal |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | |||||
Address | Cancun (Mexico) | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB;HuPBA | Approved | no | ||
Call Number | BCNPCL @ bcnpcl @ CBP2006c | Serial | 724 | ||
Permanent link to this record | |||||
Author | Fernando Vilariño; Panagiota Spyridonos; Jordi Vitria; Carolina Malagelada; Petia Radeva | ||||
Title | Linear Radial Patterns Characterization for Automatic Detection of Tonic Intestinal Contractions | Type | Book Chapter | ||
Year | 2006 | Publication | 11th Iberoamerican Congress on Pattern Recognition | Abbreviated Journal | |
Volume | 4225 | Issue | Pages | 178–187 | |
Keywords | |||||
Abstract | This work tackles the categorization of general linear radial patterns by means of valley and ridge detection and the use of directional descriptors, provided by steerable filters, in different regions of the image. We successfully apply our proposal to the specific case of automatic detection of tonic contractions in video capsule endoscopy, which represent a paradigmatic example of linear radial patterns. | ||||
Address | Cancun (Mexico) | ||||
Corporate Author | Thesis | ||||
Publisher | Springer Verlag | Place of Publication | Berlin Heidelberg | Editor | J.F. Martínez-Trinidad et al.
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | 800 | Expedition | Conference | ||
Notes | MV;OR;MILAB;SIAI | Approved | no | ||
Call Number | BCNPCL @ bcnpcl @ VSV2006c; IAM @ iam @ VSB2006f | Serial | 728 | ||
Permanent link to this record | |||||
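The record above detects tonic contractions as linear radial patterns of intestinal wrinkles. As a rough illustration of the underlying idea (local orientations all pointing at a common centre), the toy sketch below scores how radially aligned a set of undirected orientations is around a candidate centre. The `edges` input stands in for the valley/ridge detections and steerable-filter orientations from the abstract; the scoring function is an illustrative choice, not the paper's descriptor.

```python
import math

def radial_alignment_score(edges, center):
    """Score in [0, 1] of how radial a pattern is around `center`.

    edges: list of (x, y, theta); theta is the local ridge/valley
    orientation in radians. For a perfect radial pattern each ridge
    points at the centre, so theta matches the direction to the centre
    (modulo pi, since orientations are undirected).
    """
    cx, cy = center
    total = 0.0
    for x, y, theta in edges:
        to_center = math.atan2(cy - y, cx - x)
        # compare undirected orientations modulo pi
        diff = abs((theta - to_center + math.pi / 2) % math.pi - math.pi / 2)
        total += 1.0 - diff / (math.pi / 2)  # 1 = aligned, 0 = orthogonal
    return total / len(edges)

# synthetic pattern around (0, 0): every spoke points at the centre
spokes = [(math.cos(a), math.sin(a),
           math.atan2(-math.sin(a), -math.cos(a)))
          for a in (0.3, 1.1, 2.0, 2.9, 4.0, 5.2)]
print(radial_alignment_score(spokes, (0.0, 0.0)))  # → 1.0
```

A non-radial pattern (e.g. orientations tangent to circles around the centre) would score near 0 under the same measure.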
Author | Fernando Vilariño; Panagiota Spyridonos; Jordi Vitria; Carolina Malagelada; Petia Radeva | ||||
Title | A Machine Learning framework using SOMs: Applications in the Intestinal Motility Assessment | Type | Book Chapter | ||
Year | 2006 | Publication | 11th Iberoamerican Congress on Pattern Recognition | Abbreviated Journal | |
Volume | 4225 | Issue | Pages | 188–197 | |
Keywords | |||||
Abstract | Small bowel motility assessment by means of wireless capsule video endoscopy constitutes a novel clinical methodology in which a capsule with a micro-camera attached to it is swallowed by the patient, emitting an RF signal that is recorded as a video of its trip through the gut. In order to overcome the main drawback associated with this technique (the large amount of visualization time required), our efforts have been focused on the development of a machine learning system, built up in sequential stages, which provides the specialists with the useful part of the video and rejects those parts not valid for analysis. We successfully used Self-Organizing Maps in a general semi-supervised framework with the aim of tackling the different learning stages of our system. The analysis of the diverse types of images and the automatic detection of intestinal contractions are performed from the perspective of intestinal motility assessment in a clinical environment. | ||||
Address | Cancun (Mexico) | ||||
Corporate Author | Thesis | ||||
Publisher | Springer Verlag | Place of Publication | Berlin Heidelberg | Editor | J.F. Martínez-Trinidad et al.
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | 800 | Expedition | Conference | CIARP06 | |
Notes | MV;OR;MILAB;SIAI | Approved | no | ||
Call Number | BCNPCL @ bcnpcl @ VSV2006d; IAM @ iam @ VSV2006e | Serial | 729 | ||
Permanent link to this record | |||||
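The record above builds its semi-supervised framework on Self-Organizing Maps. As a minimal, self-contained sketch of what a SOM does (map high-dimensional samples onto a 2-D grid so that similar samples land on nearby units), here is a tiny pure-Python implementation; the grid size, decay schedules, and toy data are illustrative assumptions, not the paper's configuration.

```python
import math, random

def bmu(w, x):
    """Grid index (row, col) of the best-matching unit for sample x."""
    rows, cols = len(w), len(w[0])
    return min(((r, c) for r in range(rows) for c in range(cols)),
               key=lambda rc: sum((wi - xi) ** 2
                                  for wi, xi in zip(w[rc[0]][rc[1]], x)))

def train_som(data, rows=4, cols=4, dim=2, epochs=200, lr0=0.5, sigma0=2.0):
    """Train a tiny Self-Organizing Map (pure-Python sketch)."""
    random.seed(0)
    w = [[[random.random() for _ in range(dim)] for _ in range(cols)]
         for _ in range(rows)]
    for t in range(epochs):
        frac = t / epochs
        lr = lr0 * (1.0 - frac)               # learning rate decays
        sigma = sigma0 * (1.0 - frac) + 0.5   # neighbourhood shrinks
        for x in data:
            br, bc = bmu(w, x)
            for r in range(rows):
                for c in range(cols):
                    h = math.exp(-((r - br) ** 2 + (c - bc) ** 2)
                                 / (2.0 * sigma * sigma))
                    w[r][c] = [wi + lr * h * (xi - wi)
                               for wi, xi in zip(w[r][c], x)]
    return w

# two well-separated clusters should map to different grid units
data = [(0.05, 0.05), (0.1, 0.0), (0.0, 0.1),
        (0.9, 0.95), (1.0, 0.9), (0.95, 1.0)]
som = train_som(data)
print(bmu(som, (0.0, 0.0)), bmu(som, (1.0, 1.0)))
```

In the paper's setting the samples would be image descriptors rather than 2-D points, and the resulting map regions would be labelled to separate valid frames from those rejected for analysis.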
Author | Maedeh Aghaei; Mariella Dimiccoli; Petia Radeva | ||||
Title | With whom do I interact? Social interaction detection in egocentric photo-streams | Type | Conference Article | ||
Year | 2016 | Publication | 23rd International Conference on Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Given a user wearing a low-frame-rate wearable camera during a day, this work aims to automatically detect the moments when the user engages in a social interaction, solely by reviewing the photos automatically captured by the worn camera. The proposed method, inspired by the sociological concept of F-formation, exploits the distance and orientation of the individuals appearing in the scene, with respect to the user, from a bird's-eye perspective. As a result, the interaction pattern over the sequence can be understood as a two-dimensional time series corresponding to the temporal evolution of the distance and orientation features. A Long Short-Term Memory-based recurrent neural network is then trained to classify each time series. Experimental evaluation over a dataset of 30,000 images has shown promising results of the proposed method for social interaction detection in egocentric photo-streams. | ||||
Address | Cancun; Mexico; December 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICPR | ||
Notes | MILAB | Approved | no | ||
Call Number | Admin @ si @ADR2016a | Serial | 2791 | ||
Permanent link to this record | |||||
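The abstract above feeds a two-dimensional (distance, orientation) time series to an LSTM. A hedged sketch of how such a series could be derived from per-frame face detections is shown below: distance via a pinhole-camera approximation from the face-box height, orientation as the horizontal angle off the camera axis. The focal length and physical face height are illustrative assumptions, not values from the paper.

```python
import math

def interaction_series(tracks, focal=500.0, face_height_m=0.25):
    """Build a (distance, orientation) time series from face detections.

    tracks: list of (box_height_px, box_center_x_px, frame_width_px),
    one tuple per frame. Distance uses a pinhole model
    (distance = focal * real_height / pixel_height); orientation is the
    horizontal angle of the face relative to the camera axis.
    """
    series = []
    for h_px, cx, width in tracks:
        dist = focal * face_height_m / h_px          # metres (approx.)
        angle = math.atan2(cx - width / 2, focal)    # radians off-axis
        series.append((dist, angle))
    return series

# three frames: the face grows (person approaching), centred in frame
frames = [(100, 320, 640), (110, 330, 640), (125, 320, 640)]
print(interaction_series(frames))  # first sample → (1.25, 0.0)
```

A sequence classifier (the LSTM in the paper) would then label each such series as interaction or non-interaction.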
Author | Dena Bazazian; Raul Gomez; Anguelos Nicolaou; Lluis Gomez; Dimosthenis Karatzas; Andrew Bagdanov | ||||
Title | Improving Text Proposals for Scene Images with Fully Convolutional Networks | Type | Conference Article | ||
Year | 2016 | Publication | 23rd International Conference on Pattern Recognition Workshops | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Text proposals have emerged as a class-dependent version of object proposals: efficient approaches to reduce the search space of possible text object locations in an image. Combined with strong word classifiers, text proposals currently yield top state-of-the-art results in end-to-end scene text recognition. In this paper we propose an improvement over the original Text Proposals algorithm of [1], combining it with Fully Convolutional Networks to improve the ranking of proposals. Results on the ICDAR RRC and the COCO-Text datasets show superior performance over the current state of the art. | ||||
Address | Cancun; Mexico; December 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICPRW | ||
Notes | DAG; LAMP; 600.084 | Approved | no | ||
Call Number | Admin @ si @ BGN2016 | Serial | 2823 | ||
Permanent link to this record | |||||
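The record above re-ranks text proposals using a Fully Convolutional Network. A generic sketch of the re-scoring step (not the paper's exact ranking) is to score each proposal by the mean per-pixel "textness" it covers in the FCN output map and sort by that score:

```python
def rerank_proposals(heatmap, boxes):
    """Re-rank box proposals by mean per-pixel score from a heat map.

    heatmap: 2-D list of floats in [0, 1], e.g. an FCN text/no-text map.
    boxes: list of (x0, y0, x1, y1), exclusive on x1/y1.
    Returns the boxes sorted by descending mean score.
    """
    def score(b):
        x0, y0, x1, y1 = b
        vals = [heatmap[y][x] for y in range(y0, y1) for x in range(x0, x1)]
        return sum(vals) / len(vals)
    return sorted(boxes, key=score, reverse=True)

# toy map: "text" occupies the right half of a 6x4 image
hm = [[0.0] * 6 for _ in range(4)]
for y in range(4):
    for x in range(3, 6):
        hm[y][x] = 0.9
ranked = rerank_proposals(hm, [(0, 0, 3, 4), (3, 0, 6, 4)])
print(ranked[0])  # → (3, 0, 6, 4): the box over the text region
```

With thousands of proposals, an integral image over the heat map would make each box score O(1) instead of O(area).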
Author | Hugo Jair Escalante; Victor Ponce; Jun Wan; Michael A. Riegler; Baiyu Chen; Albert Clapes; Sergio Escalera; Isabelle Guyon; Xavier Baro; Pal Halvorsen; Henning Muller; Martha Larson | ||||
Title | ChaLearn Joint Contest on Multimedia Challenges Beyond Visual Analysis: An Overview | Type | Conference Article | ||
Year | 2016 | Publication | 23rd International Conference on Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | This paper provides an overview of the Joint Contest on Multimedia Challenges Beyond Visual Analysis. We organized an academic competition focused on four problems that require effective processing of multimodal information in order to be solved. Two tracks were devoted to gesture spotting and recognition from RGB-D video, two fundamental problems for human-computer interaction. Another track was devoted to a second round of the first impressions challenge, whose goal was to develop methods to recognize personality traits from short video clips. For this second round we adopted a novel collaborative-competitive (i.e., coopetition) setting. The fourth track was dedicated to the problem of video recommendation for improving user experience. The challenge was open for about 45 days and received outstanding participation: almost 200 participants registered, and 20 teams sent predictions in the final stage. The main goals of the challenge were fulfilled: the state of the art was advanced considerably in the four tracks, with novel solutions to the proposed problems (mostly relying on deep learning). However, further research is still required. The data of the four tracks will be made available to allow researchers to keep making progress. | ||||
Address | Cancun; Mexico; December 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICPR | ||
Notes | HuPBA; 602.143;MV | Approved | no | ||
Call Number | Admin @ si @ EPW2016 | Serial | 2827 | ||
Permanent link to this record | |||||
Author | Marc Bolaños; Petia Radeva | ||||
Title | Simultaneous Food Localization and Recognition | Type | Conference Article | ||
Year | 2016 | Publication | 23rd International Conference on Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | The development of automatic nutrition diaries, which would allow us to keep objective track of everything we eat, could enable a whole new world of possibilities for people concerned about their nutrition patterns. With this purpose, in this paper we propose the first method for simultaneous food localization and recognition. Our method is based on two main steps: first, producing a food activation map on the input image (i.e., a heat map of probabilities) to generate bounding-box proposals and, second, recognizing each of the food types or food-related objects present in each bounding box. We demonstrate that our proposal, compared to the most similar problem nowadays (object localization), is able to obtain high precision and reasonable recall levels with only a few bounding boxes. Furthermore, we show that it is applicable to both conventional and egocentric images. (Preprint: CoRR abs/1604.07953.) | ||||
Address | Cancun; Mexico; December 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICPR | ||
Notes | MILAB; no proj | Approved | no | ||
Call Number | Admin @ si @ BoR2016 | Serial | 2834 | ||
Permanent link to this record | |||||
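The abstract above generates bounding-box proposals from a food activation map. A minimal stand-in for that step, assuming the activation map is already computed, is to threshold it, find connected components, and box each one:

```python
from collections import deque

def boxes_from_activation(amap, thresh=0.5):
    """Turn an activation (probability) map into bounding-box proposals.

    Thresholds the map, finds 4-connected components with a BFS, and
    returns one (x0, y0, x1, y1) box per component (inclusive bounds).
    The threshold value is an illustrative assumption.
    """
    h, w = len(amap), len(amap[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for sy in range(h):
        for sx in range(w):
            if amap[sy][sx] >= thresh and not seen[sy][sx]:
                q = deque([(sy, sx)])
                seen[sy][sx] = True
                x0 = x1 = sx
                y0 = y1 = sy
                while q:
                    y, x = q.popleft()
                    x0, x1 = min(x0, x), max(x1, x)
                    y0, y1 = min(y0, y), max(y1, y)
                    for ny, nx in ((y - 1, x), (y + 1, x),
                                   (y, x - 1), (y, x + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and not seen[ny][nx]
                                and amap[ny][nx] >= thresh):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                boxes.append((x0, y0, x1, y1))
    return boxes

amap = [[0.1, 0.1, 0.1, 0.1],
        [0.1, 0.9, 0.8, 0.1],
        [0.1, 0.7, 0.9, 0.1],
        [0.1, 0.1, 0.1, 0.1]]
print(boxes_from_activation(amap))  # → [(1, 1, 2, 2)]
```

Each resulting box would then be passed to the recognition step to name the food type it contains.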
Author | Maedeh Aghaei; Mariella Dimiccoli; Petia Radeva | ||||
Title | With Whom Do I Interact? Detecting Social Interactions in Egocentric Photo-streams | Type | Conference Article | ||
Year | 2016 | Publication | 23rd International Conference on Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Given a user wearing a low-frame-rate wearable camera during a day, this work aims to automatically detect the moments when the user engages in a social interaction, solely by reviewing the photos automatically captured by the worn camera. The proposed method, inspired by the sociological concept of F-formation, exploits the distance and orientation of the individuals appearing in the scene, with respect to the user, from a bird's-eye perspective. As a result, the interaction pattern over the sequence can be understood as a two-dimensional time series corresponding to the temporal evolution of the distance and orientation features. A Long Short-Term Memory-based recurrent neural network is then trained to classify each time series. Experimental evaluation over a dataset of 30,000 images has shown promising results of the proposed method for social interaction detection in egocentric photo-streams. | ||||
Address | Cancun; Mexico; December 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICPR | ||
Notes | MILAB | Approved | no | ||
Call Number | Admin @ si @ ADR2016d | Serial | 2835 | ||
Permanent link to this record | |||||
Author | Fatemeh Noroozi; Marina Marjanovic; Angelina Njegus; Sergio Escalera; Gholamreza Anbarjafari | ||||
Title | Fusion of Classifier Predictions for Audio-Visual Emotion Recognition | Type | Conference Article | ||
Year | 2016 | Publication | 23rd International Conference on Pattern Recognition Workshops | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | This paper presents a novel multimodal emotion recognition system based on the analysis of audio and visual cues. MFCC-based features are extracted from the audio channel, and facial-landmark geometric relations are computed from the visual data. Both sets of features are learnt separately using state-of-the-art classifiers. In addition, we summarise each emotion video into a reduced set of key-frames, which are learnt in order to visually discriminate emotions by means of a Convolutional Neural Network. Finally, the confidence outputs of all classifiers from all modalities are used to define a new feature space to be learnt for final emotion prediction, in a late-fusion/stacking fashion. The experiments conducted on the eNTERFACE'05 database show significant performance improvements of our proposed system in comparison to state-of-the-art approaches. | ||||
Address | Cancun; Mexico; December 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICPRW | ||
Notes | HuPBA;MILAB; | Approved | no | ||
Call Number | Admin @ si @ NMN2016 | Serial | 2839 | ||
Permanent link to this record | |||||
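The record above fuses modalities by stacking: the confidence outputs of the per-modality classifiers become a new feature vector for a meta-classifier. A binary toy version of that idea, with a hand-rolled logistic-regression meta-classifier and made-up confidence scores, looks like this:

```python
import math

def stack_fusion(confidences, labels, lr=0.5, epochs=500):
    """Learn a logistic-regression meta-classifier over stacked
    per-modality confidence scores (late fusion / stacking, binary toy).
    Plain SGD on the log-loss; hyperparameters are illustrative."""
    n_feat = len(confidences[0])
    w = [0.0] * n_feat
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(confidences, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y                          # gradient of log-loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(model, x):
    w, b = model
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0

# each row: [audio_conf, face_conf, keyframe_conf] for class "happy"
X = [[0.9, 0.8, 0.7], [0.8, 0.9, 0.9], [0.2, 0.1, 0.3], [0.1, 0.2, 0.1]]
y = [1, 1, 0, 0]
model = stack_fusion(X, y)
print([predict(model, x) for x in X])  # → [1, 1, 0, 0]
```

The paper's setting is multi-class (one confidence per emotion per modality) and uses stronger meta-classifiers, but the feature construction is the same concatenation of confidences.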
Author | Anjan Dutta; Umapada Pal; Josep Llados | ||||
Title | Compact Correlated Features for Writer Independent Signature Verification | Type | Conference Article | ||
Year | 2016 | Publication | 23rd International Conference on Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | This paper considers the offline signature verification problem, an important research line in the field of pattern recognition. In this work we propose hybrid features that consider the local features and their global statistics in the signature image. This has been done by creating a vocabulary of histograms of oriented gradients (HOGs). We impose weights on these local features based on the height information of water reservoirs obtained from the signature. Spatial information between local features is thought to play a vital role in capturing the geometry of the signatures, which distinguishes the originals from the forged ones. Nevertheless, learning a condensed set of higher-order neighbouring features based on visual words, e.g., doublets and triplets, continues to be a challenging problem, as the possible combinations of visual words grow exponentially. To avoid this explosion of size, we create a code of local pairwise features, which are represented as joint descriptors. Local features are paired based on the edges of a graph representation built upon the Delaunay triangulation. We reveal the advantage of combining both types of visual codebooks (order one and pairwise) for the signature verification task. This is validated through encouraging results on two benchmark datasets, viz. CEDAR and GPDS300. | ||||
Address | Cancun; Mexico; December 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICPR | ||
Notes | DAG; 600.097 | Approved | no | ||
Call Number | Admin @ si @ DPL2016 | Serial | 2875 | ||
Permanent link to this record | |||||
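The abstract above avoids the exponential blow-up of word combinations by only pairing visual words along Delaunay edges. The sketch below shows that pairwise ("doublet") codebook construction; the triangulation edges are supplied directly here (in the paper they come from a Delaunay triangulation of the keypoints), and the word assignments are made up for illustration.

```python
from collections import Counter

def pairwise_codebook_histogram(word_ids, edges):
    """Histogram of pairwise visual words ('doublets').

    word_ids: visual-word index per keypoint.
    edges: graph edges (i, j) between keypoints, assumed to come from a
    Delaunay triangulation. Each edge contributes the unordered pair of
    the words at its endpoints; restricting to edges keeps the feature
    count linear in the number of keypoints instead of exponential.
    """
    pairs = Counter()
    for i, j in edges:
        a, b = sorted((word_ids[i], word_ids[j]))
        pairs[(a, b)] += 1
    return pairs

# 4 keypoints quantized to visual words, plus their triangulation edges
words = [0, 1, 1, 2]
tri_edges = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)]
print(pairwise_codebook_histogram(words, tri_edges))
```

The resulting histogram, concatenated with the order-one (single-word) histogram, gives the combined signature descriptor the abstract describes.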
Author | Marco Bellantonio; Mohammad A. Haque; Pau Rodriguez; Kamal Nasrollahi; Taisi Telve; Sergio Escalera; Jordi Gonzalez; Thomas B. Moeslund; Pejman Rasti; Golamreza Anbarjafari | ||||
Title | Spatio-Temporal Pain Recognition in CNN-based Super-Resolved Facial Images | Type | Conference Article | ||
Year | 2016 | Publication | 23rd International Conference on Pattern Recognition | Abbreviated Journal | |
Volume | 10165 | Issue | Pages | ||
Keywords | |||||
Abstract | Automatic pain detection is a long-expected solution to the prevalent medical problem of pain management. This is especially relevant when the subjects in pain are young children or patients with a limited ability to communicate about their pain experience. Computer vision-based analysis of facial pain expression provides a way of efficient pain detection. When deep machine learning methods came onto the scene, automatic pain detection exhibited even better performance. In this paper, we identify three important factors to exploit in automatic pain detection: the spatial information on pain available in each of the facial video frames, the temporal-axis information on the pain expression pattern in a subject's video sequence, and the variation of face resolution. We employ a combination of a convolutional neural network and a recurrent neural network to set up a deep hybrid pain detection framework that is able to exploit both spatial and temporal pain information from facial video. In order to analyze the effect of different facial resolutions, we introduce a super-resolution algorithm to generate facial video frames with different resolution setups. We investigate the performance on the publicly available UNBC-McMaster Shoulder Pain database. As a contribution, the paper provides novel and important information on the performance of a hybrid deep learning framework for pain detection in facial images of different resolutions. | ||||
Address | Cancun; Mexico; December 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICPR | ||
Notes | HuPBA; ISE; 600.098; 600.119 | Approved | no | ||
Call Number | Admin @ si @ BHR2016 | Serial | 2902 | ||
Permanent link to this record | |||||
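The record above combines per-frame (spatial) CNN evidence with a recurrent temporal model. As a minimal stand-in for the temporal stage only, the sketch below aggregates per-frame pain probabilities with a leaky accumulator, so a sustained expression outweighs a single noisy frame. The real model is a trained RNN; the decay constant and the score sequences are illustrative.

```python
def temporal_pain_score(frame_scores, decay=0.8):
    """Aggregate per-frame pain probabilities over time.

    A leaky accumulator: state = decay * state + (1 - decay) * score.
    Sustained high scores drive the state up; an isolated spike decays
    away. Stand-in for the recurrent stage of a CNN+RNN pipeline."""
    state = 0.0
    for s in frame_scores:
        state = decay * state + (1 - decay) * s
    return state

calm = [0.1, 0.9, 0.1, 0.1, 0.1, 0.1]   # one spurious spike
pain = [0.2, 0.7, 0.8, 0.9, 0.9, 0.8]   # sustained expression
print(temporal_pain_score(calm) < temporal_pain_score(pain))  # → True
```

A trained RNN additionally learns *which* temporal patterns matter, rather than applying a fixed decay.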
Author | Iiris Lusi; Sergio Escalera; Gholamreza Anbarjafari | ||||
Title | Human Head Pose Estimation on SASE database using Random Hough Regression Forests | Type | Conference Article | ||
Year | 2016 | Publication | 23rd International Conference on Pattern Recognition Workshops | Abbreviated Journal | |
Volume | 10165 | Issue | Pages | ||
Keywords | |||||
Abstract | In recent years head pose estimation has become an important task in face analysis scenarios. Given the availability of high resolution 3D sensors, the design of a high resolution head pose database would be beneficial for the community. In this paper, Random Hough Forests are used to estimate 3D head pose and location on a new 3D head database, SASE, which represents the baseline performance on the new data for an upcoming international head pose estimation competition. The data in SASE is acquired with a Microsoft Kinect 2 camera, including the RGB and depth information of 50 subjects with a large sample of head poses, allowing us to test methods for real-life scenarios. We briefly review the database while showing baseline head pose estimation results based on Random Hough Forests. | ||||
Address | Cancun; Mexico; December 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICPRW | ||
Notes | HuPBA; | Approved | no | ||
Call Number | Admin @ si @ LEA2016b | Serial | 2910 | ||
Permanent link to this record | |||||
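The abstract above estimates head location with Random Hough Forests, where tree leaves cast offset votes for the head centre. The sketch below shows only the voting-and-accumulation step: the per-patch offsets are given directly (in a real Hough forest they are regression outputs stored at the leaves), and the vote grid resolution is an illustrative choice.

```python
from collections import Counter

def hough_vote_center(patches, cell=5):
    """Hough-regression-style voting for an object centre.

    patches: list of ((x, y), (dx, dy)) pairs; each patch at (x, y)
    votes for centre (x + dx, y + dy). Votes are binned on a coarse
    grid and the strongest bin wins, which makes the estimate robust
    to outlier votes.
    """
    votes = Counter()
    for (x, y), (dx, dy) in patches:
        vx, vy = x + dx, y + dy
        votes[(vx // cell, vy // cell)] += 1
    (bx, by), _ = votes.most_common(1)[0]
    return (bx * cell + cell // 2, by * cell + cell // 2)

# three patches agree on a head centred near (51, 41); one outlier
patches = [((40, 40), (11, 1)), ((60, 35), (-9, 6)),
           ((50, 50), (1, -9)), ((10, 10), (3, 3))]
print(hough_vote_center(patches))  # → (52, 42)
```

In 3-D head pose estimation the same voting runs over depth patches and a higher-dimensional vote space (position plus orientation).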
Author | M. Bressan; Jordi Vitria | ||||
Title | Independent Modes of Variation in Point Distribution Models | Type | Miscellaneous | ||
Year | 2001 | Publication | In C. Arcelli, L.P. Cordella, G. Sanniti di Baja (Eds.): Visual Form 2001, 4th International Workshop on Visual Form (IWVF4), Proceedings, LNCS 2059, Springer Verlag, 123 | Abbreviated Journal |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | |||||
Address | Capri, Italy | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | OR;MV | Approved | no | ||
Call Number | BCNPCL @ bcnpcl @ BVi2001 | Serial | 80 | ||
Permanent link to this record | |||||
Author | Cesar Isaza; Joaquin Salas; Bogdan Raducanu | ||||
Title | Synthetic ground truth dataset to detect shadows cast by static objects in outdoors | Type | Conference Article | ||
Year | 2012 | Publication | 1st International Workshop on Visual Interfaces for Ground Truth Collection in Computer Vision Applications | Abbreviated Journal | |
Volume | Issue | Pages | art. 11 | ||
Keywords | |||||
Abstract | In this paper, we propose a precise synthetic ground truth dataset to study the problem of detection of the shadows cast by static objects in outdoor environments during extended periods of time (days). For our dataset, we have created a virtual scenario using a rendering software. To increase the realism of the simulated environment, we have defined the scenario in a precise geographical location. In our dataset the sun is by far the main illumination source. The sun position during the simulation time takes into consideration factors related to the geographical location, such as the latitude, longitude, elevation above sea level, and precise image capturing day and time. In our simulation the camera remains fixed. The dataset consists of seven days of simulation, from 10:00am to 5:00pm. Images are captured every 10 seconds. The shadows' ground truth is automatically computed by the rendering software. | ||||
Address | Capri, Italy | ||||
Corporate Author | Thesis | ||||
Publisher | ACM | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-1-4503-1405-3 | Medium | ||
Area | Expedition | Conference | VIGTA | ||
Notes | OR;MV | Approved | no | ||
Call Number | Admin @ si @ ISR2012a | Serial | 2037 | ||
Permanent link to this record |
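The dataset in the record above depends on placing the sun correctly for a given latitude, day, and time so the rendered shadows are geographically accurate. The rendering software computes this internally; as a sketch of the underlying geometry, here is the textbook declination/hour-angle approximation of solar elevation (longitude and equation-of-time corrections omitted):

```python
import math

def solar_elevation(lat_deg, day_of_year, solar_hour):
    """Approximate solar elevation angle in degrees.

    Uses the standard declination formula and the hour angle of local
    *solar* time. Good to within a degree or so; refinements
    (equation of time, atmospheric refraction) are omitted.
    """
    decl = math.radians(23.44) * math.sin(
        2 * math.pi * (284 + day_of_year) / 365)
    hour_angle = math.radians(15 * (solar_hour - 12))
    lat = math.radians(lat_deg)
    sin_el = (math.sin(lat) * math.sin(decl)
              + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    return math.degrees(math.asin(sin_el))

# Cancun sits near latitude 21 N; around the June solstice (day 172)
# the noon sun is nearly overhead, so static-object shadows are shortest
print(round(solar_elevation(21.0, 172, 12.0), 1))  # → 87.6
```

Sweeping `solar_hour` from 10 to 17 in 10-second steps reproduces the kind of shadow trajectory the dataset captures over each simulated day.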