Author |
Laura Igual; Joan Carles Soliva; Sergio Escalera; Roger Gimeno; Oscar Vilarroya; Petia Radeva |


|
|
Title |
Automatic Brain Caudate Nuclei Segmentation and Classification in Diagnostic of Attention-Deficit/Hyperactivity Disorder |
Type |
Journal Article |
|
Year |
2012 |
Publication  |
Computerized Medical Imaging and Graphics |
Abbreviated Journal |
CMIG |
|
|
Volume |
36 |
Issue |
8 |
Pages |
591-600 |
|
|
Keywords |
Automatic caudate segmentation; Attention-Deficit/Hyperactivity Disorder; Diagnostic test; Machine learning; Decision stumps; Dissociated dipoles |
|
|
Abstract |
We present a fully automatic diagnostic imaging test to assist in the diagnosis of Attention-Deficit/Hyperactivity Disorder, based on previously reported evidence of caudate nucleus volumetric abnormalities. The proposed method consists of two steps: a new automatic method for external and internal segmentation of the caudate based on Machine Learning methodologies, and the definition of a set of new volume relation features, 3D Dissociated Dipoles, used for caudate representation and classification. We validate each contribution separately using real data from a pediatric population, showing precise internal caudate segmentation and strong discrimination power of the diagnostic test, with significant performance improvements over other state-of-the-art methods. |
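As a rough illustration only (not the authors' implementation), a 3D dissociated dipole feature can be read as the difference between the mean intensities of two disjoint boxes of the MRI volume, thresholded by a decision stump; the box coordinates, threshold and polarity below are hypothetical.

import numpy as np

def dipole_feature(volume, box_a, box_b):
    # Dissociated dipole: difference of mean intensities of two disjoint
    # 3D boxes (unlike Haar features, the boxes need not be adjacent).
    (za, zb), (ya, yb), (xa, xb) = box_a
    (zc, zd), (yc, yd), (xc, xd) = box_b
    return volume[za:zb, ya:yb, xa:xb].mean() - volume[zc:zd, yc:yd, xc:xd].mean()

def stump_predict(value, threshold, polarity=1):
    # Decision stump: a one-level decision tree over a single feature.
    return polarity if value >= threshold else -polarity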
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
OR; HuPBA; MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ ISE2012 |
Serial |
2143 |
|
|
|
|
|
Author |
Gerard Canal; Sergio Escalera; Cecilio Angulo |


|
|
Title |
A Real-time Human-Robot Interaction system based on gestures for assistive scenarios |
Type |
Journal Article |
|
Year |
2016 |
Publication  |
Computer Vision and Image Understanding |
Abbreviated Journal |
CVIU |
|
|
Volume |
149 |
Issue |
|
Pages |
65-77 |
|
|
Keywords |
Gesture recognition; Human Robot Interaction; Dynamic Time Warping; Pointing location estimation |
|
|
Abstract |
Natural and intuitive human interaction with robotic systems is a key point in developing robots that assist people in an easy and effective way. In this paper, a Human Robot Interaction (HRI) system able to recognize gestures usually employed in human non-verbal communication is introduced, and an in-depth study of its usability is performed. The system deals with dynamic gestures, such as waving or nodding, which are recognized using a Dynamic Time Warping approach based on gesture-specific features computed from depth maps. A static gesture consisting of pointing at an object is also recognized. The pointed location is then estimated in order to detect candidate objects the user may be referring to. When the pointed object is unclear to the robot, a disambiguation procedure by means of either a verbal or gestural dialogue is performed. This skill would allow the robot to pick up an object on behalf of a user who has difficulty doing so themselves. The overall system (a NAO robot, a Wifibot robot, a Kinect v2 sensor and two laptops) is first evaluated in a structured lab setup. Then, a broad set of user tests was completed, allowing us to assess performance in terms of recognition rates, ease of use and response times. |
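As a minimal sketch of the dynamic-gesture matching step (assuming per-frame feature vectors computed from depth maps; the reference templates and thresholds are hypothetical, not the paper's values):

import numpy as np

def dtw_distance(seq_a, seq_b):
    # Dynamic Time Warping between two sequences of per-frame feature
    # vectors (one row per frame), with Euclidean frame-to-frame cost.
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A gesture is accepted when its DTW distance to a reference template
# falls below a per-gesture threshold.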
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Elsevier B.V. |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HuPBA;MILAB; |
Approved |
no |
|
|
Call Number |
Admin @ si @ CEA2016 |
Serial |
2768 |
|
|
|
|
|
Author |
Mohammad Ali Bagheri; Qigang Gao; Sergio Escalera; Huamin Ren; Thomas B. Moeslund; Elham Etemad |

|
|
Title |
Locality Regularized Group Sparse Coding for Action Recognition |
Type |
Journal Article |
|
Year |
2017 |
Publication  |
Computer Vision and Image Understanding |
Abbreviated Journal |
CVIU |
|
|
Volume |
158 |
Issue |
|
Pages |
106-114 |
|
|
Keywords |
Bag of words; Feature encoding; Locality constrained coding; Group sparse coding; Alternating direction method of multipliers; Action recognition |
|
|
Abstract |
Bag of visual words (BoVW) models are widely utilized in image/video representation and recognition. The cornerstone of these models is the encoding stage, in which local features are decomposed over a codebook in order to obtain a representation of features. In this paper, we propose a new encoding algorithm that jointly encodes the set of local descriptors of each sample while considering the locality structure of the descriptors. The proposed method takes advantage of locality coding, such as its stability and robustness to noise in the descriptors, as well as the strengths of the group coding strategy, by taking into account the potential relations among the descriptors of a sample. To efficiently implement our proposed method, we adopt the Alternating Direction Method of Multipliers (ADMM) framework, which results in quadratic complexity in the problem size. The method is applied to a challenging classification problem: action recognition with depth cameras. Experimental results demonstrate that our methodology outperforms the state of the art on the considered datasets. |
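As an illustrative building block only (a generic group-lasso proximal step as used inside ADMM, not the paper's exact locality-regularized formulation):

import numpy as np

def group_soft_threshold(Z, lam):
    # Proximal operator of the l2,1 (group-lasso) norm: each row of Z
    # holds the codes of one codebook atom across all descriptors of a
    # sample and is shrunk as a group, so few atoms are used per sample.
    norms = np.linalg.norm(Z, axis=1, keepdims=True)
    return Z * np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))

# Inside an ADMM loop, this shrinkage step alternates with a least-squares
# reconstruction update; a locality term can be added by weighting each
# atom's penalty by its distance to the descriptor (an assumption here).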
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HuPBA; no proj;MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ BGE2017 |
Serial |
3014 |
|
|
|
|
|
Author |
Pichao Wang; Wanqing Li; Philip Ogunbona; Jun Wan; Sergio Escalera |


|
|
Title |
RGB-D-based Human Motion Recognition with Deep Learning: A Survey |
Type |
Journal Article |
|
Year |
2018 |
Publication  |
Computer Vision and Image Understanding |
Abbreviated Journal |
CVIU |
|
|
Volume |
171 |
Issue |
|
Pages |
118-139 |
|
|
Keywords |
Human motion recognition; RGB-D data; Deep learning; Survey |
|
|
Abstract |
Human motion recognition is one of the most important branches of human-centered research activities. In recent years, motion recognition based on RGB-D data has attracted much attention. Along with developments in artificial intelligence, deep learning techniques have gained remarkable success in computer vision. In particular, convolutional neural networks (CNN) have achieved great success in image-based tasks, and recurrent neural networks (RNN) are renowned for sequence-based problems. Specifically, deep learning methods based on the CNN and RNN architectures have been adopted for motion recognition using RGB-D data. In this paper, a detailed overview of recent advances in RGB-D-based motion recognition is presented. The reviewed methods are broadly categorized into four groups, depending on the modality adopted for recognition: RGB-based, depth-based, skeleton-based and RGB+D-based. As a survey focused on the application of deep learning to RGB-D-based motion recognition, we explicitly discuss the advantages and limitations of existing techniques. In particular, we highlight the methods for encoding the spatial-temporal-structural information inherent in video sequences, and discuss potential directions for future research. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HUPBA; no proj;MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ WLO2018 |
Serial |
3123 |
|
|
|
|
|
Author |
Sergio Escalera; Ana Puig; Oscar Amoros; Maria Salamo |

|
|
Title |
Intelligent GPGPU Classification in Volume Visualization: a framework based on Error-Correcting Output Codes |
Type |
Journal Article |
|
Year |
2011 |
Publication  |
Computer Graphics Forum |
Abbreviated Journal |
CGF |
|
|
Volume |
30 |
Issue |
7 |
Pages |
2107-2115 |
|
|
Keywords |
|
|
|
Abstract |
IF JCR 1.455 (2010), 25/99.
In volume visualization, the definition of the regions of interest is inherently an iterative trial-and-error process of finding the best parameters to classify and render the final image. Generally, the user requires considerable expertise to analyze and edit these parameters through multi-dimensional transfer functions. In this paper, we present a framework of intelligent methods to label multiple regions of interest on demand. These methods can be split into a two-level GPU-based labelling algorithm that computes, at rendering time, a set of labelled structures using the Machine Learning Error-Correcting Output Codes (ECOC) framework. In a pre-processing step, ECOC trains a set of AdaBoost binary classifiers from a reduced pre-labelled data set. Then, at the testing stage, each classifier is independently applied to the features of a set of unlabelled samples, and the outputs are combined to perform multi-class labelling. We also propose an alternative representation of these classifiers that allows the testing stage to be highly parallelized. To exploit that parallelism, we implemented the testing stage in GPU-OpenCL. Empirical results on different data sets for several volume structures show high computational performance and classification accuracy. |
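As a minimal sketch of ECOC decoding (generic Hamming-style decoding over a ternary coding matrix; not the paper's GPU-OpenCL implementation):

import numpy as np

def ecoc_decode(binary_outputs, coding_matrix):
    # binary_outputs : (n_classifiers,) vector of {-1, +1} predictions from
    #                  the trained AdaBoost binary classifiers.
    # coding_matrix  : (n_classes, n_classifiers) matrix of {-1, 0, +1}
    #                  codewords; 0 marks classes a classifier ignores.
    # The predicted class is the codeword with the fewest disagreements
    # with the classifier outputs, ignoring zero entries.
    mask = coding_matrix != 0
    disagreements = (mask & (coding_matrix != np.sign(binary_outputs))).sum(axis=1)
    return int(np.argmin(disagreements))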
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB; HuPBA |
Approved |
no |
|
|
Call Number |
Admin @ si @ EPA2011 |
Serial |
1881 |
|