|
|
Author |
Isabelle Guyon; Kristin Bennett; Gavin Cawley; Hugo Jair Escalante; Sergio Escalera; Tin Kam Ho; Nuria Macia; Bisakha Ray; Mehreen Saeed; Alexander Statnikov; Evelyne Viegas |
|
|
Title |
AutoML Challenge 2015: Design and First Results |
Type |
Conference Article |
|
Year |
2015 |
Publication |
32nd International Conference on Machine Learning, ICML workshop, JMLR proceedings ICML15 |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
1-8 |
|
|
Keywords |
AutoML Challenge; machine learning; model selection; meta-learning; representation learning; active learning |
|
|
Abstract |
ChaLearn is organizing the Automatic Machine Learning (AutoML) contest 2015, which challenges participants to solve classification and regression problems without any human intervention. Participants' code is automatically run on the contest servers to train and test learning machines. However, there is no obligation to submit code; half of the prizes can be won by submitting prediction results only. Datasets of progressively increasing difficulty are introduced throughout the six rounds of the challenge. (Participants can
enter the competition in any round.) The rounds alternate phases in which learners are tested on datasets participants have not seen (AutoML), and phases in which participants have limited time to tweak their algorithms on those datasets to improve performance (Tweakathon). This challenge will push the state of the art in fully automatic machine learning on a wide range of real-world problems. The platform will remain available beyond the termination of the challenge: http://codalab.org/AutoML. |
|
|
Address |
Lille; France; July 2015 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICML |
|
|
Notes |
HuPBA;MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ GBC2015c |
Serial |
2656 |
|
Permanent link to this record |
|
|
|
|
Author |
Victor Ponce; Hugo Jair Escalante; Sergio Escalera; Xavier Baro |
|
|
Title |
Gesture and Action Recognition by Evolved Dynamic Subgestures |
Type |
Conference Article |
|
Year |
2015 |
Publication |
26th British Machine Vision Conference |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
129.1-129.13 |
|
|
Keywords |
|
|
|
Abstract |
This paper introduces a framework for gesture and action recognition based on the evolution of temporal gesture primitives, or subgestures. Our work is inspired by the principle of producing genetic variations within a population of gesture subsequences, with the goal of obtaining a set of gesture units that enhance the generalization capability of standard gesture recognition approaches. In our context, gesture primitives are evolved over time using dynamic programming and generative models in order to recognize complex actions. In a few generations, the proposed subgesture-based representation of actions and gestures outperforms state-of-the-art results on the MSRDaily3D and MSRAction3D datasets. |
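A minimal sketch of the evolutionary idea described above (illustrative only: a toy fitness over 1-D sequences, with the paper's dynamic programming and generative models omitted; `evolve_subgestures` and its parameters are hypothetical names):

```python
import random

def evolve_subgestures(sequence, target, pop_size=20, generations=30, seed=0):
    """Toy evolutionary search for a gesture primitive (subsequence).

    Individuals are (start, length) windows over `sequence`; fitness is the
    negative distance between the window and a `target` primitive.
    """
    rng = random.Random(seed)
    n = len(sequence)

    def fitness(ind):
        start, length = ind
        window = sequence[start:start + length]
        m = min(len(window), len(target))
        dist = sum(abs(window[i] - target[i]) for i in range(m))
        dist += abs(len(window) - len(target))  # penalise length mismatch
        return -dist

    def random_ind():
        length = rng.randint(1, max(1, n // 2))
        return (rng.randint(0, n - length), length)

    pop = [random_ind() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]       # elitist selection
        children = []
        for parent in survivors:              # genetic variation: mutate windows
            start, length = parent
            start = min(max(0, start + rng.randint(-2, 2)), n - 1)
            length = min(max(1, length + rng.randint(-1, 1)), n - start)
            children.append((start, length))
        pop = survivors + children
    return max(pop, key=fitness)
```

The real system evolves multi-dimensional skeletal subsequences and scores them with generative models rather than this toy distance.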
|
|
Address |
Swansea; UK; September 2015 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
BMVC |
|
|
Notes |
HuPBA;MV |
Approved |
no |
|
|
Call Number |
Admin @ si @ PEE2015 |
Serial |
2657 |
|
Permanent link to this record |
|
|
|
|
Author |
Huamin Ren; Weifeng Liu; Soren Ingvor Olsen; Sergio Escalera; Thomas B. Moeslund |
|
|
Title |
Unsupervised Behavior-Specific Dictionary Learning for Abnormal Event Detection |
Type |
Conference Article |
|
Year |
2015 |
Publication |
26th British Machine Vision Conference |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
Swansea; UK; September 2015 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
BMVC |
|
|
Notes |
HuPBA;MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ RLO2015 |
Serial |
2658 |
|
Permanent link to this record |
|
|
|
|
Author |
Eduardo Tusa; Arash Akbarinia; Raquel Gil Rodriguez; Corina Barbalata |
|
|
Title |
Real-Time Face Detection and Tracking Utilising OpenMP and ROS |
Type |
Conference Article |
|
Year |
2015 |
Publication |
3rd Asia-Pacific Conference on Computer Aided System Engineering |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
179 - 184 |
|
|
Keywords |
RGB-D; Kinect; Human Detection and Tracking; ROS; OpenMP |
|
|
Abstract |
The first requisite of a robot to succeed in social interactions is accurate human localisation, i.e., subject detection and tracking. Later, it is estimated whether an interaction partner seeks attention, for example by interpreting the position and orientation of the body. In computer vision, these cues are usually obtained from colour images, whose quality degrades in poorly illuminated social scenes. In such scenarios, depth sensors offer a richer representation; therefore, it is important to combine colour and depth information. The second aspect that plays a fundamental role in the acceptance of social robots is their real-time capability. Processing colour and depth images is computationally demanding. To overcome this, we propose a parallelisation strategy for face detection and tracking based on two different architectures: message passing and shared memory. Our results demonstrate high accuracy and low computational time, processing nine times as many frames in the parallel implementation. This enables real-time social robot interaction. |
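The data-parallel strategy can be sketched as follows. The paper's implementation uses OpenMP (shared memory) and ROS (message passing); this illustrative stand-in uses a Python thread pool over dummy frames, and `detect_face` is a placeholder for the real per-frame detector:

```python
from concurrent.futures import ThreadPoolExecutor

def detect_face(frame):
    """Placeholder detector: returns a fake bounding box for a frame.

    Stands in for the per-frame colour+depth detection step; a real system
    would run an actual face detector here.
    """
    return {"frame_id": frame["id"], "bbox": (10, 10, 50, 50)}

def process_stream(frames, workers=4):
    """Shared-memory data parallelism over frames, analogous to the
    paper's OpenMP strategy (here with a Python thread pool).

    Executor.map preserves input order, so results line up with frames.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(detect_face, frames))

frames = [{"id": i} for i in range(9)]
results = process_stream(frames)
```

In C/C++ the equivalent loop would be annotated with an OpenMP `parallel for` pragma; the structure (independent per-frame work, shared result buffer) is the same.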
|
|
Address |
Quito; Ecuador; July 2015 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
APCASE |
|
|
Notes |
NEUROBIT |
Approved |
no |
|
|
Call Number |
Admin @ si @ TAG2015 |
Serial |
2659 |
|
Permanent link to this record |
|
|
|
|
Author |
Arash Akbarinia; C. Alejandro Parraga |
|
|
Title |
Biologically Plausible Colour Naming Model |
Type |
Conference Article |
|
Year |
2015 |
Publication |
European Conference on Visual Perception ECVP2015 |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
Poster |
|
|
Address |
Liverpool; UK; August 2015 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ECVP |
|
|
Notes |
NEUROBIT; 600.068 |
Approved |
no |
|
|
Call Number |
Admin @ si @ AkP2015 |
Serial |
2660 |
|
Permanent link to this record |
|
|
|
|
Author |
Miguel Oliveira; Victor Santos; Angel Sappa; P. Dias |
|
|
Title |
Scene Representations for Autonomous Driving: an approach based on polygonal primitives |
Type |
Conference Article |
|
Year |
2015 |
Publication |
2nd Iberian Robotics Conference ROBOT2015 |
Abbreviated Journal |
|
|
|
Volume |
417 |
Issue |
|
Pages |
503-515 |
|
|
Keywords |
Scene reconstruction; Point cloud; Autonomous vehicles |
|
|
Abstract |
In this paper, we present a novel methodology to compute a 3D scene
representation. The algorithm uses macro scale polygonal primitives to model the scene. This means that the representation of the scene is given as a list of large scale polygons that describe the geometric structure of the environment. Results show that the approach is capable of producing accurate descriptions of the scene. In addition, the algorithm is very efficient when compared to other techniques. |
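As an illustration of extracting a single planar primitive, the following sketch fits one plane to a point cloud by linear least squares (an assumption-laden miniature, not the paper's full polygon-extraction pipeline; `fit_plane` is a hypothetical name):

```python
def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to 3D points.

    A minimal sketch of recovering one planar (polygonal) primitive from a
    point cloud; the paper builds full scene models from many such polygons.
    """
    # accumulate the normal equations  (A^T A) p = A^T z  with rows [x, y, 1]
    sxx = sxy = sx = syy = sy = n = 0.0
    sxz = syz = sz = 0.0
    for x, y, z in points:
        sxx += x * x; sxy += x * y; sx += x
        syy += y * y; sy += y; n += 1
        sxz += x * z; syz += y * z; sz += z
    M = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    rhs = [sxz, syz, sz]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    # solve the 3x3 system by Cramer's rule
    d = det3(M)
    coeffs = []
    for col in range(3):
        Mc = [row[:] for row in M]
        for r in range(3):
            Mc[r][col] = rhs[r]
        coeffs.append(det3(Mc) / d)
    return tuple(coeffs)  # (a, b, c)
```

A real pipeline would segment the cloud first and fit many such planes, then bound each one by a polygon.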
|
|
Address |
Lisboa; Portugal; November 2015 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ROBOT |
|
|
Notes |
ADAS; 600.076; 600.086 |
Approved |
no |
|
|
Call Number |
Admin @ si @ OSS2015a |
Serial |
2662 |
|
Permanent link to this record |
|
|
|
|
Author |
Miguel Oliveira; L. Seabra Lopes; G. Hyun Lim; S. Hamidreza Kasaei; Angel Sappa; A. Tom |
|
|
Title |
Concurrent Learning of Visual Codebooks and Object Categories in Open-ended Domains |
Type |
Conference Article |
|
Year |
2015 |
Publication |
International Conference on Intelligent Robots and Systems |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
2488 - 2495 |
|
|
Keywords |
Visual Learning; Computer Vision; Autonomous Agents |
|
|
Abstract |
In open-ended domains, robots must continuously learn new object categories. When the training sets are created offline, it is not possible to ensure their representativeness with respect to the object categories and features the system will find when operating online. In the Bag of Words model, visual codebooks are constructed from training sets created offline. This might lead to non-discriminative visual words and, as a consequence, to poor recognition performance. This paper proposes a visual object recognition system which concurrently learns, in an incremental and online fashion, both the visual object category representations and the codebook words used to encode them. The codebook is defined using Gaussian Mixture Models which are updated using new object views. The approach shares similarities with the human visual object recognition system: evidence suggests that the development of recognition capabilities occurs on multiple levels and is sustained over large periods of time. Results show that the proposed system with concurrent learning of object categories and codebooks is capable of learning more categories, requiring fewer examples, and with similar accuracies, when compared to the classical Bag of Words approach using offline constructed codebooks. |
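The concurrent-update idea can be illustrated with a minimal online codebook (running means with hard assignment; the paper uses full Gaussian Mixture Models, and every name below is illustrative):

```python
import math

class OnlineCodebook:
    """Incremental visual codebook: each word is a running mean + count.

    A minimal stand-in for a GMM codebook that is updated online as new
    object views arrive, instead of being built offline.
    """

    def __init__(self, new_word_threshold=1.0):
        self.words = []  # each entry: [mean_vector, count]
        self.threshold = new_word_threshold

    def _dist(self, a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def update(self, descriptor):
        """Assign the descriptor to the nearest word and shift its mean,
        or create a new word when nothing is close enough (open-ended)."""
        if self.words:
            i, d = min(((i, self._dist(w[0], descriptor))
                        for i, w in enumerate(self.words)),
                       key=lambda t: t[1])
            if d < self.threshold:
                mean, count = self.words[i]
                count += 1
                # incremental mean update: m <- m + (x - m) / n
                self.words[i][0] = [m + (x - m) / count
                                    for m, x in zip(mean, descriptor)]
                self.words[i][1] = count
                return i
        self.words.append([list(descriptor), 1])
        return len(self.words) - 1

    def encode(self, descriptor):
        """Hard-assignment Bag of Words index for a descriptor."""
        return min(range(len(self.words)),
                   key=lambda i: self._dist(self.words[i][0], descriptor))
```

The paper additionally updates covariances and mixing weights and learns category models on top of the evolving codebook.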
|
|
Address |
Hamburg; Germany; October 2015 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
IROS |
|
|
Notes |
ADAS; 600.076 |
Approved |
no |
|
|
Call Number |
Admin @ si @ OSL2015 |
Serial |
2664 |
|
Permanent link to this record |
|
|
|
|
Author |
Mohammad Rouhani; Angel Sappa |
|
|
Title |
The Richer Representation the Better Registration |
Type |
Journal Article |
|
Year |
2013 |
Publication |
IEEE Transactions on Image Processing |
Abbreviated Journal |
TIP |
|
|
Volume |
22 |
Issue |
12 |
Pages |
5036-5049 |
|
|
Keywords |
|
|
|
Abstract |
In this paper, the registration problem is formulated as a point-to-model distance minimization. Unlike most existing works, which are based on minimizing a point-wise correspondence term, this formulation avoids the time-consuming correspondence search. In the first stage, the target set is described through an implicit function by employing a linear least squares fitting. This function can be either an implicit polynomial or an implicit B-spline, from a coarse to fine representation. In the second stage, we show how the obtained implicit representation is used as an interface to convert the point-to-point registration into a point-to-implicit problem. Furthermore, we show that this registration distance is smooth and can be minimized through the Levenberg-Marquardt algorithm. All the formulations presented for both stages are compact and easy to implement. In addition, we show that our registration method can be handled using any implicit representation, though some are coarse and others provide finer representations; hence, a tradeoff between speed and accuracy can be set by employing the right implicit function. Experimental results and comparisons in 2D and 3D show the robustness and the speed of convergence of the proposed approach. |
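The two-stage formulation can be illustrated in miniature (a circle as the implicit model, translation-only alignment, and plain gradient descent standing in for Levenberg-Marquardt; all names are illustrative):

```python
import math

def register_to_implicit(target, source, lr=0.02, iters=500):
    """Point-to-implicit registration sketch (translation only).

    Stage 1 fits an implicit circle f(p) = |p|^2 - r^2 to the target in
    closed form (the paper uses implicit polynomials or B-splines).
    Stage 2 minimises the smooth distance mean f(p + t)^2 over the
    translation t by gradient descent.
    """
    # Stage 1: implicit representation of the target (circle about origin)
    r2 = sum(x * x + y * y for x, y in target) / len(target)

    # Stage 2: no correspondences needed; descend on the implicit distance
    tx = ty = 0.0
    n = len(source)
    for _ in range(iters):
        gx = gy = 0.0
        for x, y in source:
            f = (x + tx) ** 2 + (y + ty) ** 2 - r2
            gx += 4 * f * (x + tx) / n
            gy += 4 * f * (y + ty) / n
        tx -= lr * gx
        ty -= lr * gy
    return tx, ty

# target: 36 points on a circle of radius 2; source: the same points shifted
target = [(2 * math.cos(a * math.pi / 18), 2 * math.sin(a * math.pi / 18))
          for a in range(36)]
source = [(x + 0.5, y - 0.3) for x, y in target]
tx, ty = register_to_implicit(target, source)
# recovered translation undoes the shift: (tx, ty) ≈ (-0.5, 0.3)
```

Levenberg-Marquardt converges much faster on the same smooth residuals; gradient descent is used here only to keep the sketch dependency-free.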
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1057-7149 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
Admin @ si @ RoS2013 |
Serial |
2665 |
|
Permanent link to this record |
|
|
|
|
Author |
R.A.Bendezu; E.Barba; E.Burri; D.Cisternas; Carolina Malagelada; Santiago Segui; Anna Accarino; S.Quiroga; E.Monclus; I.Navazo |
|
|
Title |
Intestinal gas content and distribution in health and in patients with functional gut symptoms |
Type |
Journal Article |
|
Year |
2015 |
Publication |
Neurogastroenterology & Motility |
Abbreviated Journal |
NEUMOT |
|
|
Volume |
27 |
Issue |
9 |
Pages |
1249-1257 |
|
|
Keywords |
|
|
|
Abstract |
BACKGROUND:
The precise relation of intestinal gas to symptoms, particularly abdominal bloating and distension, remains incompletely elucidated. Our aim was to define the normal values of intestinal gas volume and distribution and to identify abnormalities in relation to functional-type symptoms.
METHODS:
Abdominal computed tomography scans were evaluated in healthy subjects (n = 37) and in patients in three conditions: basal (when they were feeling well; n = 88), during an episode of abdominal distension (n = 82) and after a challenge diet (n = 24). Intestinal gas content and distribution were measured by an original analysis program. Identification of patients outside the normal range was performed by machine learning techniques (one-class classifier). Results are expressed as median (IQR) or mean ± SE, as appropriate.
KEY RESULTS:
In healthy subjects the gut contained 95 (71, 141) mL gas distributed along the entire lumen. No differences were detected between patients studied under asymptomatic basal conditions and healthy subjects. However, either during a spontaneous bloating episode or once challenged with a flatulogenic diet, luminal gas was found to be increased and/or abnormally distributed in about one-fourth of the patients. These patients detected outside the normal range by the classifier exhibited a significantly greater number of abnormal features than those within the normal range (3.7 ± 0.4 vs 0.4 ± 0.1; p < 0.001).
CONCLUSIONS & INFERENCES:
The analysis of a large cohort of subjects using original techniques provides unique and heretofore unavailable information on the volume and distribution of intestinal gas in normal conditions and in relation to functional gastrointestinal symptoms. |
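A simple stand-in for the one-class classification step (per-feature normal ranges learned from healthy subjects; the study's actual classifier and feature set are not specified here, and all names are illustrative):

```python
import statistics

def fit_normal_range(healthy, k=3.0):
    """One-class classifier sketch: learn per-feature normal ranges
    (mean +/- k * sd) from healthy subjects only, then count how many
    features of a new subject fall outside those ranges.
    """
    models = []
    for feature in zip(*healthy):           # iterate over feature columns
        mu = statistics.fmean(feature)
        sd = statistics.stdev(feature)
        models.append((mu, sd))

    def n_abnormal(subject):
        """Number of abnormal features for one subject."""
        return sum(1 for x, (mu, sd) in zip(subject, models)
                   if abs(x - mu) > k * sd)

    return n_abnormal
```

Subjects with a positive abnormal-feature count would be flagged as outside the normal range, mirroring the abnormal-feature counts reported in the abstract.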
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ BBB2015 |
Serial |
2667 |
|
Permanent link to this record |
|
|
|
|
Author |
Fahad Shahbaz Khan; Jiaolong Xu; Muhammad Anwer Rao; Joost Van de Weijer; Andrew Bagdanov; Antonio Lopez |
|
|
Title |
Recognizing Actions through Action-specific Person Detection |
Type |
Journal Article |
|
Year |
2015 |
Publication |
IEEE Transactions on Image Processing |
Abbreviated Journal |
TIP |
|
|
Volume |
24 |
Issue |
11 |
Pages |
4422-4432 |
|
|
Keywords |
|
|
|
Abstract |
Action recognition in still images is a challenging problem in computer vision. To facilitate comparative evaluation independently of person detection, the standard evaluation protocol for action recognition uses an oracle person detector to obtain perfect bounding box information at both training and test time. The assumption is that, in practice, a general person detector will provide candidate bounding boxes for action recognition. In this paper, we argue that this paradigm is suboptimal and that action class labels should already be considered during the detection stage. Motivated by the observation that body pose is strongly conditioned on action class, we show that: 1) the existing state-of-the-art generic person detectors are not adequate for proposing candidate bounding boxes for action classification; 2) due to limited training examples, the direct training of action-specific person detectors is also inadequate; and 3) using only a small number of labeled action examples, the transfer learning is able to adapt an existing detector to propose higher quality bounding boxes for subsequent action classification. To the best of our knowledge, we are the first to investigate transfer learning for the task of action-specific person detection in still images. We perform extensive experiments on two benchmark data sets: 1) Stanford-40 and 2) PASCAL VOC 2012. For the action detection task (i.e., both person localization and classification of the action performed), our approach outperforms methods based on general person detection by 5.7% mean average precision (MAP) on Stanford-40 and 2.1% MAP on PASCAL VOC 2012. Our approach also significantly outperforms the state of the art with a MAP of 45.4% on Stanford-40 and 31.4% on PASCAL VOC 2012. We also evaluate our action detection approach for the task of action classification (i.e., recognizing actions without localizing them). 
For this task, our approach, without using any ground-truth person localization at test time, outperforms, on both data sets, state-of-the-art methods that do use person locations. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1057-7149 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS; LAMP; 600.076; 600.079 |
Approved |
no |
|
|
Call Number |
Admin @ si @ KXR2015 |
Serial |
2668 |
|
Permanent link to this record |
|
|
|
|
Author |
Adria Ruiz; Joost Van de Weijer; Xavier Binefa |
|
|
Title |
From emotions to action units with hidden and semi-hidden-task learning |
Type |
Conference Article |
|
Year |
2015 |
Publication |
16th IEEE International Conference on Computer Vision |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
3703-3711 |
|
|
Keywords |
|
|
|
Abstract |
Limited annotated training data is a challenging problem in Action Unit recognition. In this paper, we investigate how the use of large databases labelled according to the 6 universal facial expressions can increase the generalization ability of Action Unit classifiers. For this purpose, we propose a novel learning framework: Hidden-Task Learning. HTL aims to learn a set of Hidden-Tasks (Action Units) for which samples are not available but, in contrast, training data is easier to obtain from a set of related Visible-Tasks (Facial Expressions). To that end, HTL is able to exploit prior knowledge about the relation between Hidden- and Visible-Tasks. In our case, we base this prior knowledge on empirical psychological studies providing statistical correlations between Action Units and universal facial expressions. Additionally, we extend HTL to Semi-Hidden Task Learning (SHTL) assuming that Action Unit training samples are also provided. Performing exhaustive experiments over four different datasets, we show that HTL and SHTL improve the generalization ability of AU classifiers by training them with additional facial expression data. Additionally, we show that SHTL achieves competitive performance compared with state-of-the-art Transductive Learning approaches which face the problem of limited training data by using unlabelled test samples during training. |
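The core intuition of scoring hidden tasks through a prior relation can be sketched as a single linear step (the matrix values below are illustrative placeholders, not the empirical AU-expression correlations the paper uses):

```python
def predict_hidden_tasks(expression_probs, prior):
    """Score each hidden task (Action Unit) from visible-task (facial
    expression) probabilities through a prior relation matrix.

    `prior` maps each AU to per-expression weights; the prediction is a
    weighted sum of expression probabilities.
    """
    aus = {}
    for au, weights in prior.items():
        aus[au] = sum(weights.get(expr, 0.0) * p
                      for expr, p in expression_probs.items())
    return aus

# illustrative prior: AU12 (lip corner puller) relates to happiness,
# AU4 (brow lowerer) to sadness -- values here are made up
prior = {"AU12": {"happiness": 0.9, "sadness": 0.0},
         "AU4":  {"happiness": 0.0, "sadness": 0.8}}
scores = predict_hidden_tasks({"happiness": 0.7, "sadness": 0.3}, prior)
```

The actual framework learns AU classifiers jointly under this prior rather than applying it as a fixed post-hoc projection.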
|
|
Address |
Santiago de Chile; Chile; December 2015 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICCV |
|
|
Notes |
LAMP; 600.068; 600.079 |
Approved |
no |
|
|
Call Number |
Admin @ si @ RWB2015 |
Serial |
2671 |
|
Permanent link to this record |
|
|
|
|
Author |
Lluis Garrido; M.Guerrieri; Laura Igual |
|
|
Title |
Image Segmentation with Cage Active Contours |
Type |
Journal Article |
|
Year |
2015 |
Publication |
IEEE Transactions on Image Processing |
Abbreviated Journal |
TIP |
|
|
Volume |
24 |
Issue |
12 |
Pages |
5557 - 5566 |
|
|
Keywords |
Level sets; Mean value coordinates; Parametrized active contours |
|
|
Abstract |
In this paper, we present a framework for image segmentation based on parametrized active contours. The evolving contour is parametrized according to a reduced set of control points that form a closed polygon and have a clear visual interpretation. The parametrization, called mean value coordinates, stems from the techniques used in computer graphics to animate virtual models. Our framework makes it easy to formulate region-based energies to segment an image. In particular, we present three different local region-based energy terms: 1) the mean model; 2) the Gaussian model; and 3) the histogram model. We show the behavior of our method on synthetic and real images and compare the performance with state-of-the-art level set methods. |
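Mean value coordinates themselves are straightforward to compute. A sketch of Floater's formula for a point strictly inside a closed polygon (the cage of control points; `mean_value_coordinates` is an illustrative name, not the paper's code):

```python
import math

def mean_value_coordinates(point, polygon):
    """Mean value coordinates of `point` w.r.t. a closed polygon.

    w_i = (tan(a_{i-1}/2) + tan(a_i/2)) / |v_i - p|, where a_i is the angle
    at p in the triangle (p, v_i, v_{i+1}); the weights are normalised so
    they sum to 1 and reproduce p as a convex-like combination of vertices.
    """
    px, py = point
    n = len(polygon)
    weights = []
    for i in range(n):
        vprev = polygon[(i - 1) % n]
        vcur = polygon[i]
        vnext = polygon[(i + 1) % n]

        def angle(a, b):
            # unsigned angle at `point` between directions to vertices a and b
            d = (math.atan2(b[1] - py, b[0] - px)
                 - math.atan2(a[1] - py, a[0] - px))
            while d <= -math.pi:
                d += 2 * math.pi
            while d > math.pi:
                d -= 2 * math.pi
            return abs(d)

        r = math.hypot(vcur[0] - px, vcur[1] - py)
        w = (math.tan(angle(vprev, vcur) / 2)
             + math.tan(angle(vcur, vnext) / 2)) / r
        weights.append(w)
    total = sum(weights)
    return [w / total for w in weights]
```

Moving a cage vertex then deforms every interior point through these fixed coordinates, which is what lets a handful of control points drive the evolving contour.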
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1057-7149 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ GGI2015 |
Serial |
2673 |
|
Permanent link to this record |
|
|
|
|
Author |
Marta Nuñez-Garcia; Sonja Simpraga; M.Angeles Jurado; Maite Garolera; Roser Pueyo; Laura Igual |
|
|
Title |
FADR: Functional-Anatomical Discriminative Regions for rest fMRI Characterization |
Type |
Conference Article |
|
Year |
2015 |
Publication |
Machine Learning in Medical Imaging, Proceedings of 6th International Workshop, MLMI 2015, Held in Conjunction with MICCAI 2015 |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
61-68 |
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
Munich; Germany; October 2015 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
MLMI |
|
|
Notes |
MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ NSJ2015 |
Serial |
2674 |
|
Permanent link to this record |
|
|
|
|
Author |
Chen Zhang; Maria del Mar Vila Muñoz; Petia Radeva; Roberto Elosua; Maria Grau; Angels Betriu; Elvira Fernandez-Giraldez; Laura Igual |
|
|
Title |
Carotid Artery Segmentation in Ultrasound Images |
Type |
Conference Article |
|
Year |
2015 |
Publication |
Computing and Visualization for Intravascular Imaging and Computer Assisted Stenting (CVII-STENT2015), Joint MICCAI Workshops |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
Munich; Germany; October 2015 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CVII-STENT |
|
|
Notes |
MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ ZVR2015 |
Serial |
2675 |
|
Permanent link to this record |
|
|
|
|
Author |
Onur Ferhat; Arcadi Llanza; Fernando Vilariño |
|
|
Title |
Gaze interaction for multi-display systems using natural light eye-tracker |
Type |
Conference Article |
|
Year |
2015 |
Publication |
2nd International Workshop on Solutions for Automatic Gaze Data Analysis |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
Bielefeld; Germany; September 2015 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
SAGA |
|
|
Notes |
MV;SIAI |
Approved |
no |
|
|
Call Number |
Admin @ si @ FLV2015b |
Serial |
2676 |
|
Permanent link to this record |