Author |
Xialei Liu; Joost Van de Weijer; Andrew Bagdanov |
|
|
Title |
Exploiting Unlabeled Data in CNNs by Self-Supervised Learning to Rank |
Type |
Journal Article |
|
Year |
2019 |
Publication |
IEEE Transactions on Pattern Analysis and Machine Intelligence |
Abbreviated Journal |
TPAMI |
|
|
Volume |
41 |
Issue |
8 |
Pages |
1862-1878 |
|
|
Keywords |
Task analysis;Training;Image quality;Visualization;Uncertainty;Labeling;Neural networks;Learning from rankings;image quality assessment;crowd counting;active learning |
|
|
Abstract |
For many applications the collection of labeled data is expensive and laborious. Exploitation of unlabeled data during training is thus a long-pursued objective of machine learning. Self-supervised learning addresses this by positing an auxiliary task (different, but related to the supervised task) for which data is abundantly available. In this paper, we show how ranking can be used as a proxy task for some regression problems. As another contribution, we propose an efficient backpropagation technique for Siamese networks which prevents the redundant computation introduced by the multi-branch network architecture. We apply our framework to two regression problems: Image Quality Assessment (IQA) and Crowd Counting. For both we show how to automatically generate ranked image sets from unlabeled data. Our results show that networks trained to regress to the ground truth targets for labeled data and to simultaneously learn to rank unlabeled data obtain significantly better, state-of-the-art results for both IQA and crowd counting. In addition, we show that measuring network uncertainty on the self-supervised proxy task is a good measure of informativeness of unlabeled data. This can be used to drive an algorithm for active learning and we show that this reduces labeling effort by up to 50 percent. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
LAMP; 600.109; 600.106; 600.120 |
Approved |
no |
|
|
Call Number |
LWB2019 |
Serial |
3267 |
|
Permanent link to this record |
|
|
|
|
Author |
Javier Vazquez; Maria Vanrell; Ramon Baldrich; Francesc Tous |
|
|
Title |
Color Constancy by Category Correlation |
Type |
Journal Article |
|
Year |
2012 |
Publication |
IEEE Transactions on Image Processing |
Abbreviated Journal |
TIP |
|
|
Volume |
21 |
Issue |
4 |
Pages |
1997-2007 |
|
|
Keywords |
|
|
|
Abstract |
Finding color representations which are stable to illuminant changes is still an open problem in computer vision. Until now most approaches have been based on physical constraints or statistical assumptions derived from the scene, while very little attention has been paid to the effects that the selected illuminants have on the final color image representation. The novelty of this work is to propose perceptual constraints that are computed on the corrected images. We define the category hypothesis, which weights the set of feasible illuminants according to their ability to map the corrected image onto specific colors. Here we choose these colors as the universal color categories related to basic linguistic terms, which have been psychophysically measured. These color categories encode natural color statistics, and their relevance across different cultures is indicated by the fact that they have received a common color name. From this category hypothesis we propose a fast implementation that allows the sampling of a large set of illuminants. Experiments prove that our method rivals current state-of-the-art performance without the need for training algorithmic parameters. Additionally, the method can be used as a framework to insert top-down information from other sources, thus opening further research directions in solving for color constancy. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1057-7149 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
CIC |
Approved |
no |
|
|
Call Number |
Admin @ si @ VVB2012 |
Serial |
1999 |
|
Permanent link to this record |
|
|
|
|
Author |
Marc Serra; Olivier Penacchio; Robert Benavente; Maria Vanrell |
|
|
Title |
Names and Shades of Color for Intrinsic Image Estimation |
Type |
Conference Article |
|
Year |
2012 |
Publication |
25th IEEE Conference on Computer Vision and Pattern Recognition |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
278-285 |
|
|
Keywords |
|
|
|
Abstract |
In recent years, intrinsic image decomposition has gained attention. Most state-of-the-art methods are based on the assumption that reflectance changes come along with strong image edges. Recently, user intervention in the recovery problem has proved to be a remarkable source of improvement. In this paper, we propose a novel approach that aims to overcome the shortcomings of pure edge-based methods by introducing strong surface descriptors, such as the color-name descriptor, which brings in high-level considerations resembling top-down intervention. We also use a second surface descriptor, termed color-shade, which allows us to include physical considerations derived from the image formation model, capturing gradual color surface variations. Both color cues are combined by means of a Markov Random Field. The method is quantitatively tested on the MIT ground truth dataset using different error metrics, achieving state-of-the-art performance. |
|
|
Address |
Providence, Rhode Island |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
IEEE Xplore |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1063-6919 |
ISBN |
978-1-4673-1226-4 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CVPR |
|
|
Notes |
CIC |
Approved |
no |
|
|
Call Number |
Admin @ si @ SPB2012 |
Serial |
2026 |
|
Permanent link to this record |
|
|
|
|
Author |
Fahad Shahbaz Khan; Muhammad Anwer Rao; Joost Van de Weijer; Andrew Bagdanov; Maria Vanrell; Antonio Lopez |
|
|
Title |
Color Attributes for Object Detection |
Type |
Conference Article |
|
Year |
2012 |
Publication |
25th IEEE Conference on Computer Vision and Pattern Recognition |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
3306-3313 |
|
|
Keywords |
pedestrian detection |
|
|
Abstract |
State-of-the-art object detectors typically use shape information as a low-level feature representation to capture the local structure of an object. This paper shows that early fusion of shape and color, as is popular in image classification, leads to a significant drop in performance for object detection. Moreover, such approaches also yield suboptimal results for object categories with varying importance of color and shape. In this paper we propose the use of color attributes as an explicit color representation for object detection. Color attributes are compact, computationally efficient, and when combined with traditional shape features provide state-of-the-art results for object detection. Our method is tested on the PASCAL VOC 2007 and 2009 datasets and results clearly show that our method improves over state-of-the-art techniques despite its simplicity. We also introduce a new dataset consisting of cartoon character images in which color plays a pivotal role. On this dataset, our approach yields a significant gain of 14% in mean AP over conventional state-of-the-art methods. |
|
|
Address |
Providence; Rhode Island; USA |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
IEEE Xplore |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1063-6919 |
ISBN |
978-1-4673-1226-4 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CVPR |
|
|
Notes |
ADAS; CIC; |
Approved |
no |
|
|
Call Number |
Admin @ si @ KRW2012 |
Serial |
1935 |
|
Permanent link to this record |
|
|
|
|
Author |
Arjan Gijsenij; Theo Gevers; Joost Van de Weijer |
|
|
Title |
Improving Color Constancy by Photometric Edge Weighting |
Type |
Journal Article |
|
Year |
2012 |
Publication |
IEEE Transactions on Pattern Analysis and Machine Intelligence |
Abbreviated Journal |
TPAMI |
|
|
Volume |
34 |
Issue |
5 |
Pages |
918-929 |
|
|
Keywords |
|
|
|
Abstract |
Edge-based color constancy methods make use of image derivatives to estimate the illuminant. However, different edge types exist in real-world images, such as material, shadow and highlight edges. These different edge types may have a distinctive influence on the performance of the illuminant estimation. Therefore, in this paper, an extensive analysis is provided of the influence of different edge types on the performance of edge-based color constancy methods. First, an edge-based taxonomy is presented classifying edge types based on their photometric properties (e.g. material, shadow-geometry and highlights). Then, a performance evaluation of edge-based color constancy is provided using these different edge types. From this performance evaluation it is derived that specular and shadow edge types are more valuable than material edges for the estimation of the illuminant. To this end, the (iterative) weighted Grey-Edge algorithm is proposed in which these edge types are more emphasized for the estimation of the illuminant. Images that are recorded under controlled circumstances demonstrate that the proposed iterative weighted Grey-Edge algorithm based on highlights reduces the median angular error by approximately 25%. In an uncontrolled environment, improvements in angular error of up to 11% are obtained with respect to regular edge-based color constancy. |
|
|
Address |
Los Alamitos; CA; USA |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0162-8828 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
CIC;ISE |
Approved |
no |
|
|
Call Number |
Admin @ si @ GGW2012 |
Serial |
1850 |
|
Permanent link to this record |
|
|
|
|
Author |
Michael Holte; Bhaskar Chakraborty; Jordi Gonzalez; Thomas B. Moeslund |
|
|
Title |
A Local 3D Motion Descriptor for Multi-View Human Action Recognition from 4D Spatio-Temporal Interest Points |
Type |
Journal Article |
|
Year |
2012 |
Publication |
IEEE Journal of Selected Topics in Signal Processing |
Abbreviated Journal |
J-STSP |
|
|
Volume |
6 |
Issue |
5 |
Pages |
553-565 |
|
|
Keywords |
|
|
|
Abstract |
In this paper, we address the problem of human action recognition in reconstructed 3-D data acquired by multi-camera systems. We contribute to this field by introducing a novel 3-D action recognition approach based on detection of 4-D (3-D space + time) spatio-temporal interest points (STIPs) and local description of 3-D motion features. STIPs are detected in multi-view images and extended to 4-D using 3-D reconstructions of the actors and pixel-to-vertex correspondences of the multi-camera setup. Local 3-D motion descriptors, histogram of optical 3-D flow (HOF3D), are extracted from estimated 3-D optical flow in the neighborhood of each 4-D STIP and made view-invariant. The local HOF3D descriptors are divided using 3-D spatial pyramids to capture and improve the discrimination between arm- and leg-based actions. Based on these pyramids of HOF3D descriptors we build a bag-of-words (BoW) vocabulary of human actions, which is compressed and classified using agglomerative information bottleneck (AIB) and support vector machines (SVMs), respectively. Experiments on the publicly available i3DPost and IXMAS datasets show promising state-of-the-art results and validate the performance and view-invariance of the approach. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1932-4553 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ISE |
Approved |
no |
|
|
Call Number |
Admin @ si @ HCG2012 |
Serial |
1994 |
|
Permanent link to this record |
|
|
|
|
Author |
Frederic Sampedro; Sergio Escalera |
|
|
Title |
Spatial codification of label predictions in Multi-scale Stacked Sequential Learning: A case study on multi-class medical volume segmentation |
Type |
Journal Article |
|
Year |
2015 |
Publication |
IET Computer Vision |
Abbreviated Journal |
IETCV |
|
|
Volume |
9 |
Issue |
3 |
Pages |
439-446 |
|
|
Keywords |
|
|
|
Abstract |
In this study, the authors propose the spatial codification of label predictions within the multi-scale stacked sequential learning (MSSL) framework, a successful learning scheme to deal with non-independent identically distributed data entries. After providing a motivation for this objective, they describe its theoretical framework based on the introduction of the blurred shape model as a smart descriptor to codify the spatial distribution of the predicted labels and define the new extended feature set for the second stacked classifier. They then particularise this scheme to be applied in volume segmentation applications. Finally, they test the implementation of the proposed framework on two medical volume segmentation datasets, obtaining significant performance improvements (with 95% confidence) in comparison to a standard AdaBoost classifier and classical MSSL approaches. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1751-9632 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HuPBA;MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ SaE2015 |
Serial |
2551 |
|
Permanent link to this record |
|
|
|
|
Author |
Adela Barbulescu; Wenjuan Gong; Jordi Gonzalez; Thomas B. Moeslund; Xavier Roca |
|
|
Title |
3D Human Pose Estimation Using 2D Body Part Detectors |
Type |
Conference Article |
|
Year |
2012 |
Publication |
21st International Conference on Pattern Recognition |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
2484-2487 |
|
|
Keywords |
|
|
|
Abstract |
Automatic 3D reconstruction of human poses from monocular images is a challenging and popular topic in the computer vision community, which provides a wide range of applications in multiple areas. Solutions for 3D pose estimation involve various learning approaches, such as support vector machines and Gaussian processes, but many encounter difficulties in cluttered scenarios and require additional input data, such as silhouettes, or controlled camera settings. We present a framework that is capable of estimating the 3D pose of a person from single images or monocular image sequences without requiring background information and which is robust to camera variations. The framework models the non-linearity present in human pose estimation as it benefits from flexible learning approaches, including a highly customizable 2D detector. Results on the HumanEva benchmark show how these learning approaches perform and how they influence the quality of the 3D pose estimates. |
|
|
Address |
Tsukuba, Japan |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1051-4651 |
ISBN |
978-1-4673-2216-4 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICPR |
|
|
Notes |
ISE |
Approved |
no |
|
|
Call Number |
Admin @ si @ BGG2012 |
Serial |
2172 |
|
Permanent link to this record |
|
|
|
|
Author |
Mikhail Mozerov |
|
|
Title |
Constrained Optical Flow Estimation as a Matching Problem |
Type |
Journal Article |
|
Year |
2013 |
Publication |
IEEE Transactions on Image Processing |
Abbreviated Journal |
TIP |
|
|
Volume |
22 |
Issue |
5 |
Pages |
2044-2055 |
|
|
Keywords |
|
|
|
Abstract |
In general, discretization in the motion vector domain yields an intractable number of labels. In this paper we propose an approach that can reduce the general optical flow problem to a constrained matching problem by pre-estimating a 2D disparity labeling map of the desired discrete motion vector function. One of the goals of this paper is to estimate the coarse distribution of motion vectors and then utilize this distribution as a global constraint for discrete optical flow estimation. This pre-estimation is done with a simple frame-to-frame correlation technique also known as the digital symmetric phase-only filter (SPOF). We discover a strong correlation between the output of the SPOF and the motion vector distribution of the related optical flow. A two-step matching paradigm for optical flow estimation is applied: pixel-accuracy (integer flow) estimation and subpixel-accuracy estimation. The matching problem is solved by global optimization. Experiments on the Middlebury optical flow datasets confirm our intuitive assumptions about the strong correlation between the motion vector distribution of optical flow and the maximal peaks of the SPOF outputs. The overall performance of the proposed method is promising and achieves state-of-the-art results on the Middlebury benchmark. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1057-7149 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ISE |
Approved |
no |
|
|
Call Number |
Admin @ si @ Moz2013 |
Serial |
2191 |
|
Permanent link to this record |
|
|
|
|
Author |
German Ros; Sebastian Ramos; Manuel Granados; Amir Bakhtiary; David Vazquez; Antonio Lopez |
|
|
Title |
Vision-based Offline-Online Perception Paradigm for Autonomous Driving |
Type |
Conference Article |
|
Year |
2015 |
Publication |
IEEE Winter Conference on Applications of Computer Vision |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
231-238 |
|
|
Keywords |
Autonomous Driving; Scene Understanding; SLAM; Semantic Segmentation |
|
|
Abstract |
Autonomous driving is a key factor for future mobility. Properly perceiving the environment of the vehicles is essential for safe driving, which requires computing accurate geometric and semantic information in real time. In this paper, we challenge state-of-the-art computer vision algorithms for building a perception system for autonomous driving. An inherent drawback in the computation of visual semantics is the trade-off between accuracy and computational cost. We propose to circumvent this problem by following an offline-online strategy. During the offline stage dense 3D semantic maps are created. In the online stage the current driving area is recognized in the maps via a re-localization process, which allows the pre-computed accurate semantics and 3D geometry to be retrieved in real time. Then, by detecting the dynamic obstacles, we obtain a rich understanding of the current scene. We quantitatively evaluate our proposal on the KITTI dataset and discuss the related open challenges for the computer vision community. |
|
|
Address |
Hawaii; January 2015 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
ACDC |
Expedition |
|
Conference |
WACV |
|
|
Notes |
ADAS; 600.076 |
Approved |
no |
|
|
Call Number |
ADAS @ adas @ RRG2015 |
Serial |
2499 |
|
Permanent link to this record |
|
|
|
|
Author |
Alejandro Gonzalez Alzate; Gabriel Villalonga; Jiaolong Xu; David Vazquez; Jaume Amores; Antonio Lopez |
|
|
Title |
Multiview Random Forest of Local Experts Combining RGB and LIDAR data for Pedestrian Detection |
Type |
Conference Article |
|
Year |
2015 |
Publication |
IEEE Intelligent Vehicles Symposium IV2015 |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
356-361 |
|
|
Keywords |
Pedestrian Detection |
|
|
Abstract |
Despite recent significant advances, pedestrian detection continues to be an extremely challenging problem in real scenarios. In order to develop a detector that successfully operates under these conditions, it becomes critical to leverage multiple cues, multiple imaging modalities and a strong multi-view classifier that accounts for different pedestrian views and poses. In this paper we provide an extensive evaluation that gives insight into how each of these aspects (multi-cue, multimodality and strong multi-view classifier) affects performance both individually and when integrated together. In the multimodality component we explore the fusion of RGB and depth maps obtained by high-definition LIDAR, a type of modality that is only recently starting to receive attention. As our analysis reveals, although all the aforementioned aspects significantly help in improving the performance, the fusion of visible spectrum and depth information allows the accuracy to be boosted by a much larger margin. The resulting detector not only ranks among the top performers on the challenging KITTI benchmark, but it is built upon very simple blocks that are easy to implement and computationally efficient. These simple blocks can be easily replaced with more sophisticated ones recently proposed, such as the use of convolutional neural networks for feature representation, to further improve the accuracy. |
|
|
Address |
Seoul; Korea; June 2015 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
ACDC |
Expedition |
|
Conference |
IV |
|
|
Notes |
ADAS; 600.076; 600.057; 600.054 |
Approved |
no |
|
|
Call Number |
ADAS @ adas @ GVX2015 |
Serial |
2625 |
|
Permanent link to this record |
|
|
|
|
Author |
Sergio Escalera; Jordi Gonzalez; Xavier Baro; Pablo Pardo; Junior Fabian; Marc Oliu; Hugo Jair Escalante; Ivan Huerta; Isabelle Guyon |
|
|
Title |
ChaLearn Looking at People 2015 new competitions: Age Estimation and Cultural Event Recognition |
Type |
Conference Article |
|
Year |
2015 |
Publication |
IEEE International Joint Conference on Neural Networks IJCNN2015 |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
1-8 |
|
|
Keywords |
|
|
|
Abstract |
Following previous series of Looking at People (LAP) challenges [1], [2], [3], in 2015 ChaLearn runs two new competitions within the field of Looking at People: age and cultural event recognition in still images. We propose the first crowdsourcing application to collect and label data about the apparent age of people instead of the real age. In terms of cultural event recognition, tens of categories have to be recognized. This involves scene understanding and human analysis. This paper summarizes both challenges and data, providing some initial baselines. The results of the first round of the competition were presented at the ChaLearn LAP 2015 IJCNN special session on computer vision and robotics http://www.dtic.ua.es/~jgarcia/IJCNN2015. Details of the ChaLearn LAP competitions can be found at http://gesture.chalearn.org/. |
|
|
Address |
Killarney; Ireland; July 2015 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
IJCNN |
|
|
Notes |
HuPBA; ISE; 600.063; 600.078;MV |
Approved |
no |
|
|
Call Number |
Admin @ si @ EGB2015 |
Serial |
2591 |
|
Permanent link to this record |
|
|
|
|
Author |
Gerard Canal; Cecilio Angulo; Sergio Escalera |
|
|
Title |
Gesture based Human Multi-Robot interaction |
Type |
Conference Article |
|
Year |
2015 |
Publication |
IEEE International Joint Conference on Neural Networks IJCNN2015 |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
The emergence of robot applications for nontechnical users implies designing new ways of interaction between robotic platforms and users. The main goal of this work is the development of a gestural interface to interact with robots in a similar way as humans do, allowing the user to provide information about the task through non-verbal communication. The gesture recognition application has been implemented using the Microsoft Kinect v2 sensor. Hence, a real-time algorithm based on skeletal features is described to deal with both static gestures and dynamic ones, the latter being recognized using a weighted Dynamic Time Warping method. The gesture recognition application has been implemented in a multi-robot case. A NAO humanoid robot is in charge of interacting with the users and responding to the visual signals they produce. Moreover, a wheeled Wifibot robot carries both the sensor and the NAO robot, easing navigation when necessary. A broad set of user tests has been carried out, demonstrating that the system is indeed a natural approach to human-robot interaction, with fast response, ease of use, and high gesture recognition rates. |
|
|
Address |
Killarney; Ireland; July 2015 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
IJCNN |
|
|
Notes |
HuPBA;MILAB |
Approved |
no |
|
|
Call Number |
CAE2015a |
Serial |
2651 |
|
Permanent link to this record |
|
|
|
|
Author |
Hugo Jair Escalante; Jose Martinez; Sergio Escalera; Victor Ponce; Xavier Baro |
|
|
Title |
Improving Bag of Visual Words Representations with Genetic Programming |
Type |
Conference Article |
|
Year |
2015 |
Publication |
IEEE International Joint Conference on Neural Networks IJCNN2015 |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
The bag of visual words is a well established representation in diverse computer vision problems. Taking inspiration from the fields of text mining and retrieval, this representation has proved to be very effective in a large number of domains. In most cases, a standard term-frequency weighting scheme is considered for representing images and videos in computer vision. This is somewhat surprising, as there are many alternative ways of generating bag of words representations within the text processing community. This paper explores the use of alternative weighting schemes for landmark tasks in computer vision: image categorization and gesture recognition. We study the suitability of using well-known supervised and unsupervised weighting schemes for such tasks. More importantly, we devise a genetic program that learns new ways of representing images and videos under the bag of visual words representation. The proposed method learns to combine term-weighting primitives trying to maximize the classification performance. Experimental results are reported in standard image and video data sets showing the effectiveness of the proposed evolutionary algorithm. |
|
|
Address |
Killarney; Ireland; July 2015 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
IJCNN |
|
|
Notes |
HuPBA;MV |
Approved |
no |
|
|
Call Number |
Admin @ si @ EME2015 |
Serial |
2603 |
|
Permanent link to this record |
|
|
|
|
Author |
Isabelle Guyon; Kristin Bennett; Gavin Cawley; Hugo Jair Escalante; Sergio Escalera; Tin Kam Ho; Nuria Macia; Bisakha Ray; Alexander Statnikov; Evelyne Viegas |
|
|
Title |
Design of the 2015 ChaLearn AutoML Challenge |
Type |
Conference Article |
|
Year |
2015 |
Publication |
IEEE International Joint Conference on Neural Networks IJCNN2015 |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
ChaLearn is organizing for IJCNN 2015 an Automatic Machine Learning challenge (AutoML) to solve classification and regression problems from given feature representations, without any human intervention. This is a challenge with code submission: the code submitted can be executed automatically on the challenge servers to train and test learning machines on new datasets. However, there is no obligation to submit code. Half of the prizes can be won by just submitting prediction results. There are six rounds (Prep, Novice, Intermediate, Advanced, Expert, and Master) in which datasets of progressive difficulty are introduced (5 per round). There is no requirement to participate in previous rounds to enter a new round. The rounds alternate AutoML phases, in which submitted code is “blind tested” on datasets the participants have never seen before, and Tweakathon phases giving the participants time (~1 month) to improve their methods by tweaking their code on those datasets. This challenge will push the state-of-the-art in fully automatic machine learning on a wide range of problems taken from real-world applications. The platform will remain available beyond the termination of the challenge: http://codalab.org/AutoML |
|
|
Address |
Killarney; Ireland; July 2015 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
IJCNN |
|
|
Notes |
HuPBA;MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ GBC2015a |
Serial |
2604 |
|
Permanent link to this record |