Records | |||||
---|---|---|---|---|---|
Author | Marta Nuñez-Garcia; Sonja Simpraga; M. Angeles Jurado; Maite Garolera; Roser Pueyo; Laura Igual | ||||
Title | FADR: Functional-Anatomical Discriminative Regions for rest fMRI Characterization | Type | Conference Article | ||
Year | 2015 | Publication | Machine Learning in Medical Imaging, Proceedings of 6th International Workshop, MLMI 2015, Held in Conjunction with MICCAI 2015 | Abbreviated Journal | |
Volume | Issue | Pages | 61-68 | ||
Keywords | |||||
Abstract | |||||
Address | Munich; Germany; October 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | MLMI | ||
Notes | MILAB | Approved | no | ||
Call Number | Admin @ si @ NSJ2015 | Serial | 2674 | ||
Author | Aniol Lidon; Xavier Giro; Marc Bolaños; Petia Radeva; Markus Seidl; Matthias Zeppelzauer | ||||
Title | UPC-UB-STP @ MediaEval 2015 diversity task: iterative reranking of relevant images | Type | Conference Article | ||
Year | 2015 | Publication | 2015 MediaEval Retrieving Diverse Images Task | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | This paper presents the results of the UPC-UB-STP team in the 2015 MediaEval Retrieving Diverse Images Task. The goal of the challenge is to provide a ranked list of Flickr photos for a predefined set of queries. Our approach first generates a ranking of images based on a query-independent estimation of their relevance. Only the top results are kept and iteratively re-ranked based on their intra-similarity to introduce diversity. | ||||
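The iterative re-ranking step described in the abstract can be sketched as a greedy procedure (a minimal illustration only, assuming precomputed relevance scores and a pairwise similarity function; all names and the `top_k` parameter are hypothetical, not the team's actual implementation):

```python
def diversity_rerank(candidates, relevance, similarity, top_k=50):
    """Greedy re-ranking: keep the most relevant images, then order them
    so each picked image is maximally dissimilar to those already chosen."""
    # Keep only the top-k most relevant candidates.
    pool = sorted(candidates, key=lambda c: relevance[c], reverse=True)[:top_k]
    ranked = [pool.pop(0)]  # seed with the most relevant image
    while pool:
        # Pick the candidate least similar to anything already ranked.
        nxt = min(pool, key=lambda c: max(similarity(c, r) for r in ranked))
        ranked.append(nxt)
        pool.remove(nxt)
    return ranked
```

With near-duplicate images the second pick jumps to the most dissimilar candidate even if a more relevant near-duplicate exists, which is the intended relevance-then-diversity trade-off.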
Address | Wurzen; Germany; September 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | MediaEval | ||
Notes | MILAB | Approved | no | ||
Call Number | Admin @ si @LGB2016 | Serial | 2793 | ||
Author | Firat Ismailoglu; Ida G. Sprinkhuizen-Kuyper; Evgueni Smirnov; Sergio Escalera; Ralf Peeters | ||||
Title | Fractional Programming Weighted Decoding for Error-Correcting Output Codes | Type | Conference Article | ||
Year | 2015 | Publication | Multiple Classifier Systems, Proceedings of 12th International Workshop, MCS 2015 | Abbreviated Journal |
Volume | Issue | Pages | 38-50 | ||
Keywords | |||||
Abstract | In order to increase the classification performance obtained with Error-Correcting Output Codes (ECOC) designs, introducing weights in the decoding phase of ECOC has attracted a lot of interest. In this work, we present a method for ECOC designs that focuses on increasing the hypothesis margin on the data samples given a base classifier. In doing so, we implicitly reward base classifiers with high performance and penalize those with low performance. The resulting objective function is of the fractional programming type, and we solve it using Dinkelbach's algorithm. Tests on well-known UCI datasets show that the presented method is superior to unweighted decoding and outperforms state-of-the-art weighted decoding methods in most of the performed experiments. | ||||
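The paper's objective is solved with Dinkelbach's algorithm for fractional programming. A generic sketch of that algorithm on a toy ratio maximization follows (the toy objective and candidate set are illustrative assumptions, not the paper's ECOC weighting objective):

```python
def dinkelbach(f, g, candidates, tol=1e-9, max_iter=100):
    """Maximize f(x)/g(x) over a finite candidate set via Dinkelbach's
    algorithm: repeatedly solve the parametric problem
    max_x f(x) - lam * g(x) and update lam = f(x)/g(x) until it stabilizes."""
    lam = 0.0
    for _ in range(max_iter):
        # Inner parametric problem (here: exhaustive search over candidates).
        x = max(candidates, key=lambda c: f(c) - lam * g(c))
        new_lam = f(x) / g(x)
        if abs(new_lam - lam) < tol:
            return x, new_lam
        lam = new_lam
    return x, lam
```

For example, maximizing (x + 1)/(x² + 1) over a grid converges in a handful of iterations to x ≈ √2 − 1, because each update of `lam` strictly increases the ratio until the maximizer repeats.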
Address | Gunzburg; Germany; June 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Springer International Publishing | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-3-319-20247-1 | Medium | ||
Area | Expedition | Conference | MCS | ||
Notes | HuPBA;MILAB | Approved | no | ||
Call Number | Admin @ si @ ISS2015 | Serial | 2601 | ||
Author | Fernando Vilariño | ||||
Title | Computer Vision and Performing Arts | Type | Conference Article | ||
Year | 2015 | Publication | Korean Scholars of Marketing Science | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | |||||
Address | Seoul; Korea; October 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | KAMS | ||
Notes | MV;SIAI | Approved | no | ||
Call Number | Admin @ si @Vil2015 | Serial | 2799 | ||
Author | Alejandro Gonzalez Alzate; Gabriel Villalonga; Jiaolong Xu; David Vazquez; Jaume Amores; Antonio Lopez | ||||
Title | Multiview Random Forest of Local Experts Combining RGB and LIDAR data for Pedestrian Detection | Type | Conference Article | ||
Year | 2015 | Publication | IEEE Intelligent Vehicles Symposium IV2015 | Abbreviated Journal | |
Volume | Issue | Pages | 356-361 | ||
Keywords | Pedestrian Detection | ||||
Abstract | Despite recent significant advances, pedestrian detection remains an extremely challenging problem in real scenarios. To develop a detector that operates successfully under these conditions, it becomes critical to leverage multiple cues, multiple imaging modalities, and a strong multi-view classifier that accounts for different pedestrian views and poses. In this paper we provide an extensive evaluation that gives insight into how each of these aspects (multi-cue, multimodality, and a strong multi-view classifier) affects performance, both individually and when integrated. In the multimodality component we explore the fusion of RGB and depth maps obtained by high-definition LIDAR, a modality that has only recently started to receive attention. As our analysis reveals, although all the aforementioned aspects significantly help to improve performance, the fusion of visible-spectrum and depth information boosts accuracy by a much larger margin. The resulting detector not only ranks among the top performers in the challenging KITTI benchmark, but is also built upon very simple blocks that are easy to implement and computationally efficient. These simple blocks can easily be replaced with more sophisticated ones recently proposed, such as convolutional neural networks for feature representation, to further improve accuracy. | ||||
Address | Seoul; Korea; June 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | ACDC | Expedition | Conference | IV | |
Notes | ADAS; 600.076; 600.057; 600.054 | Approved | no | ||
Call Number | ADAS @ adas @ GVX2015 | Serial | 2625 | ||
Author | Aleksandr Setkov; Fabio Martinez Carillo; Michele Gouiffes; Christian Jacquemin; Maria Vanrell; Ramon Baldrich | ||||
Title | DAcImPro: A Novel Database of Acquired Image Projections and Its Application to Object Recognition | Type | Conference Article | ||
Year | 2015 | Publication | Advances in Visual Computing. Proceedings of 11th International Symposium, ISVC 2015 Part II | Abbreviated Journal | |
Volume | 9475 | Issue | Pages | 463-473 | |
Keywords | Projector-camera systems; Feature descriptors; Object recognition | ||||
Abstract | Projector-camera systems are designed to improve the projection quality by comparing original images with their captured projections, which is usually complicated due to high photometric and geometric variations. Many research works address this problem using their own test data, which makes it extremely difficult to compare different proposals. This paper has two main contributions. Firstly, we introduce a new database of acquired image projections (DAcImPro) that, covering photometric and geometric conditions and providing data for ground-truth computation, can serve to evaluate different algorithms in projector-camera systems. Secondly, a new object recognition scenario from acquired projections is presented, which could be of great interest in domains such as home video projection and public presentations. We show that the task is more challenging than the classical recognition problem and thus requires additional pre-processing, such as color compensation or projection-area selection. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Springer International Publishing | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | 0302-9743 | ISBN | 978-3-319-27862-9 | Medium | |
Area | Expedition | Conference | ISVC | ||
Notes | CIC | Approved | no | ||
Call Number | Admin @ si @ SMG2015 | Serial | 2736 | ||
Author | Marçal Rusiñol; Dimosthenis Karatzas; Josep Llados | ||||
Title | Automatic Verification of Properly Signed Multi-page Document Images | Type | Conference Article | ||
Year | 2015 | Publication | Proceedings of the Eleventh International Symposium on Visual Computing | Abbreviated Journal | |
Volume | 9475 | Issue | Pages | 327-336 | |
Keywords | Document Image; Manual Inspection; Signature Verification; Rejection Criterion; Document Flow | ||||
Abstract | In this paper we present an industrial application for the automatic screening of incoming multi-page documents in a banking workflow, aimed at determining whether these documents are properly signed. The proposed method is divided into three main steps. First, individual pages are classified in order to identify those that should contain a signature. Second, we segment within those key pages the locations where the signatures should appear. The last step checks whether the signatures are actually present. Our method is tested in a real large-scale environment, and we report results from checking two different types of real multi-page contracts totalling more than 14,500 pages. | ||||
Address | Las Vegas, Nevada, USA; December 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | 9475 | Series Issue | Edition | ||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ISVC | ||
Notes | DAG; 600.077 | Approved | no | ||
Call Number | Admin @ si @ | Serial | 3189 | ||
Author | Fernando Vilariño; Dan Norton; Onur Ferhat | ||||
Title | Memory Fields: DJs in the Library | Type | Conference Article | ||
Year | 2015 | Publication | 21st Symposium of Electronic Arts | Abbreviated Journal |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | |||||
Address | Vancouver; Canada; August 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ISEA | ||
Notes | ;SIAI | Approved | no | ||
Call Number | Admin @ si @VNF2015 | Serial | 2800 | ||
Author | Miguel Oliveira; L. Seabra Lopes; G. Hyun Lim; S. Hamidreza Kasaei; Angel Sappa; A. Tom | ||||
Title | Concurrent Learning of Visual Codebooks and Object Categories in Openended Domains | Type | Conference Article | ||
Year | 2015 | Publication | International Conference on Intelligent Robots and Systems | Abbreviated Journal | |
Volume | Issue | Pages | 2488 - 2495 | ||
Keywords | Visual Learning; Computer Vision; Autonomous Agents | ||||
Abstract | In open-ended domains, robots must continuously learn new object categories. When the training sets are created offline, it is not possible to ensure their representativeness with respect to the object categories and features the system will find when operating online. In the Bag of Words model, visual codebooks are constructed from training sets created offline. This might lead to non-discriminative visual words and, as a consequence, to poor recognition performance. This paper proposes a visual object recognition system that concurrently learns, in an incremental and online fashion, both the visual object category representations and the codebook words used to encode them. The codebook is defined using Gaussian Mixture Models which are updated using new object views. The approach shares similarities with the human visual object recognition system: evidence suggests that the development of recognition capabilities occurs on multiple levels and is sustained over large periods of time. Results show that the proposed system with concurrent learning of object categories and codebooks is capable of learning more categories, requiring fewer examples, with similar accuracy, compared to the classical Bag of Words approach using offline-constructed codebooks. | ||||
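The incremental codebook idea can be illustrated with a heavily simplified online codebook (a sketch only: it uses running means with hard assignment rather than the paper's full Gaussian Mixture Model updates; the class name and the `spawn_dist` threshold are assumptions):

```python
import numpy as np

class OnlineCodebook:
    """Simplified sketch of an incrementally updated codebook: each visual
    word keeps a running mean and count. A new feature either updates the
    closest word or, if too far from all words, spawns a new one."""
    def __init__(self, spawn_dist=2.0):
        self.means, self.counts = [], []
        self.spawn_dist = spawn_dist

    def update(self, feature):
        feature = np.asarray(feature, dtype=float)
        if self.means:
            d = [np.linalg.norm(feature - m) for m in self.means]
            k = int(np.argmin(d))
            if d[k] < self.spawn_dist:
                # Running-mean update of the matched codeword.
                self.counts[k] += 1
                self.means[k] += (feature - self.means[k]) / self.counts[k]
                return k
        # Feature far from every existing word: start a new codeword.
        self.means.append(feature.copy())
        self.counts.append(1)
        return len(self.means) - 1

    def encode(self, feature):
        """Hard-assign a feature to its nearest codeword index."""
        d = [np.linalg.norm(np.asarray(feature, float) - m) for m in self.means]
        return int(np.argmin(d))
```

The full GMM version would additionally track per-component covariances and mixing weights, but the core property is the same: codewords adapt as new object views arrive online.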
Address | Hamburg; Germany; October 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | IROS | ||
Notes | ADAS; 600.076 | Approved | no | ||
Call Number | Admin @ si @ OSL2015 | Serial | 2664 | ||
Author | Kamal Nasrollahi; Sergio Escalera; P. Rasti; Gholamreza Anbarjafari; Xavier Baro; Hugo Jair Escalante; Thomas B. Moeslund | ||||
Title | Deep Learning based Super-Resolution for Improved Action Recognition | Type | Conference Article | ||
Year | 2015 | Publication | 5th International Conference on Image Processing Theory, Tools and Applications IPTA2015 | Abbreviated Journal | |
Volume | Issue | Pages | 67 - 72 | ||
Keywords | |||||
Abstract | Action recognition systems mostly work with videos of proper quality and resolution. Even the most challenging benchmark databases for action recognition hardly include low-resolution videos from, e.g., surveillance cameras. In videos recorded by such cameras, people appear very small due to their distance from the camera, which challenges action recognition algorithms. Simple upsampling methods, like bicubic interpolation, cannot recover the detailed information that helps recognition. To deal with this problem, in this paper we combine the results of bicubic interpolation with those of a state-of-the-art deep learning-based super-resolution algorithm through an alpha-blending approach. Experimental results on a down-sampled version of a large subset of the Hollywood2 benchmark database show the importance of the proposed system in increasing the recognition rate of a state-of-the-art action recognition system when handling low-resolution videos. | ||||
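The alpha-blending combination of the two upsampling results is a simple convex mix; a minimal sketch (the function name and the `alpha` value are assumptions, not taken from the paper):

```python
import numpy as np

def blend_upsampled(bicubic_img, sr_img, alpha=0.5):
    """Alpha-blend a bicubic-upsampled frame with a super-resolved frame.
    `alpha` weights the super-resolution output, (1 - alpha) the bicubic one."""
    bicubic = np.asarray(bicubic_img, dtype=float)
    sr = np.asarray(sr_img, dtype=float)
    assert bicubic.shape == sr.shape, "both inputs must share one resolution"
    return alpha * sr + (1.0 - alpha) * bicubic
```

Choosing `alpha` trades the smooth but blurry bicubic output against the sharper, but occasionally artifact-prone, super-resolved one.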
Address | Orleans; France; November 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | IPTA | ||
Notes | HuPBA;MV | Approved | no | ||
Call Number | Admin @ si @ NER2015 | Serial | 2648 | ||
Author | Carles Sanchez; Jorge Bernal; F. Javier Sanchez; Marta Diez-Ferrer; Antoni Rosell; Debora Gil | ||||
Title | Towards On-line Quantification of Tracheal Stenosis from Videobronchoscopy | Type | Conference Article | ||
Year | 2015 | Publication | 6th International Conference on Information Processing in Computer-Assisted Interventions IPCAI2015 | Abbreviated Journal | |
Volume | 10 | Issue | 6 | Pages | 935-945 |
Keywords | |||||
Abstract | PURPOSE: The lack of an objective measurement of the degree of tracheal obstruction has a negative impact on the choice of treatment and can lead to unnecessary repeated explorations and additional scans. Accurate computation of tracheal stenosis in videobronchoscopy would constitute a breakthrough for this noninvasive technique and a reduction in operation cost for the public health service. METHODS: Stenosis calculation is based on the comparison of the region delimited by the lumen in an obstructed frame and the region delimited by the first visible ring in a healthy frame. We propose a parametric strategy for the extraction of lumen and tracheal ring regions based on models of their geometry and appearance that guide a deformable model. To ensure systematic applicability, we present a statistical framework to choose optimal parameter values and a strategy to choose the frames that minimize the impact of scope optical distortion. RESULTS: Our method has been tested on 40 cases covering different stenosed tracheas. Experiments report a non-clinically relevant [Formula: see text] of discrepancy in the calculated stenotic area and a computational time allowing online implementation in the operating room. CONCLUSIONS: Our methodology allows reliable measurement of airway narrowing in the operating room. To fully assess its clinical impact, a prospective clinical trial should be done. | ||||
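A common way to turn the two segmented regions into an obstruction degree is the relative area reduction; the sketch below assumes that standard definition (the abstract does not state the paper's exact formula):

```python
def stenosis_percentage(obstructed_area, healthy_area):
    """Degree of tracheal obstruction as the relative reduction of the
    lumen area in the obstructed frame with respect to the reference area
    delimited by the first visible ring in a healthy frame."""
    if healthy_area <= 0:
        raise ValueError("healthy reference area must be positive")
    return 100.0 * (1.0 - obstructed_area / healthy_area)
```

A lumen of 30 area units against a healthy reference of 120 units would thus read as a 75% stenosis.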
Address | Barcelona; Spain; June 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | IPCAI | ||
Notes | IAM; MV; 600.075 | Approved | no | ||
Call Number | Admin @ si @ SBS2015b | Serial | 2613 | ||
Author | Dan Norton; Fernando Vilariño; Onur Ferhat | ||||
Title | Memory Field – Creative Engagement in Digital Collections | Type | Conference Article | ||
Year | 2015 | Publication | Internet Librarian International Conference | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | “Memory Fields” is a trans-disciplinary project aiming at the (re)valorisation of digital collections. Its main deliverable is an interface for a dual-screen installation, used to access and mix the public library's digital collections. The collections used in this case are a collection of digitised posters from the Spanish Civil War, belonging to the Arxiu General de Catalunya, and a collection of field recordings made by Dan Norton. The system generates visualisations, and the images and sounds are mixed together using the narrative primitives of video DJing. Users contribute to the digital collections by adding personal memories and observations. The comments and recollections appear as flowers growing in a “memory field”, and memories remain public in a Twitter feed (@Memoryfields). | ||||
Address | London; UK; October 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ILI | ||
Notes | MV;SIAI | Approved | no | ||
Call Number | Admin @ si @NVF2015 | Serial | 2796 | ||
Author | Sergio Escalera; Jordi Gonzalez; Xavier Baro; Pablo Pardo; Junior Fabian; Marc Oliu; Hugo Jair Escalante; Ivan Huerta; Isabelle Guyon | ||||
Title | ChaLearn Looking at People 2015 new competitions: Age Estimation and Cultural Event Recognition | Type | Conference Article | ||
Year | 2015 | Publication | IEEE International Joint Conference on Neural Networks IJCNN2015 | Abbreviated Journal | |
Volume | Issue | Pages | 1-8 | ||
Keywords | |||||
Abstract | Following previous series of Looking at People (LAP) challenges [1], [2], [3], in 2015 ChaLearn runs two new competitions within the field of Looking at People: age estimation and cultural event recognition in still images. We propose the first crowdsourcing application to collect and label data on the apparent age of people instead of their real age. In terms of cultural event recognition, tens of categories have to be recognized, which involves scene understanding and human analysis. This paper summarizes both challenges and their data, providing some initial baselines. The results of the first round of the competition were presented at the ChaLearn LAP 2015 IJCNN special session on computer vision and robotics http://www.dtic.ua.es/∼jgarcia/IJCNN2015. Details of the ChaLearn LAP competitions can be found at http://gesture.chalearn.org/. | ||||
Address | Killarney; Ireland; July 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | IJCNN | ||
Notes | HuPBA; ISE; 600.063; 600.078;MV | Approved | no | ||
Call Number | Admin @ si @ EGB2015 | Serial | 2591 | ||
Author | Hugo Jair Escalante; Jose Martinez; Sergio Escalera; Victor Ponce; Xavier Baro | ||||
Title | Improving Bag of Visual Words Representations with Genetic Programming | Type | Conference Article | ||
Year | 2015 | Publication | IEEE International Joint Conference on Neural Networks IJCNN2015 | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | The bag of visual words is a well established representation in diverse computer vision problems. Taking inspiration from the fields of text mining and retrieval, this representation has proved to be very effective in a large number of domains. In most cases, a standard term-frequency weighting scheme is considered for representing images and videos in computer vision. This is somewhat surprising, as there are many alternative ways of generating bag of words representations within the text processing community. This paper explores the use of alternative weighting schemes for landmark tasks in computer vision: image categorization and gesture recognition. We study the suitability of using well-known supervised and unsupervised weighting schemes for such tasks. More importantly, we devise a genetic program that learns new ways of representing images and videos under the bag of visual words representation. The proposed method learns to combine term-weighting primitives trying to maximize the classification performance. Experimental results are reported in standard image and video data sets showing the effectiveness of the proposed evolutionary algorithm. | ||||
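As one concrete alternative to plain term-frequency weighting, the classic tf-idf scheme from text retrieval can be applied to visual-word histograms (a generic illustration of a term-weighting primitive; this is not the genetic program learned in the paper):

```python
import math

def tfidf_weights(histograms):
    """Re-weight bag-of-visual-words histograms with the classic tf-idf
    scheme: term frequency scaled by the inverse document frequency of
    each visual word across the image collection."""
    n_docs = len(histograms)
    n_words = len(histograms[0])
    # Document frequency: in how many images each visual word occurs.
    df = [sum(1 for h in histograms if h[w] > 0) for w in range(n_words)]
    idf = [math.log(n_docs / df[w]) if df[w] else 0.0 for w in range(n_words)]
    weighted = []
    for h in histograms:
        total = sum(h) or 1  # normalize counts to term frequencies
        weighted.append([(h[w] / total) * idf[w] for w in range(n_words)])
    return weighted
```

Visual words that occur in every image receive zero idf and thus no weight, which is exactly the discriminativeness criterion that motivates moving beyond raw term frequency.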
Address | Killarney; Ireland; July 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | IJCNN | ||
Notes | HuPBA;MV | Approved | no | ||
Call Number | Admin @ si @ EME2015 | Serial | 2603 | ||
Author | Isabelle Guyon; Kristin Bennett; Gavin Cawley; Hugo Jair Escalante; Sergio Escalera; Tin Kam Ho; Nuria Macia; Bisakha Ray; Alexander Statnikov; Evelyne Viegas | ||||
Title | Design of the 2015 ChaLearn AutoML Challenge | Type | Conference Article | ||
Year | 2015 | Publication | IEEE International Joint Conference on Neural Networks IJCNN2015 | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | ChaLearn is organizing for IJCNN 2015 an Automatic Machine Learning challenge (AutoML) to solve classification and regression problems from given feature representations, without any human intervention. This is a challenge with code submission: the code submitted can be executed automatically on the challenge servers to train and test learning machines on new datasets. However, there is no obligation to submit code; half of the prizes can be won by just submitting prediction results. There are six rounds (Prep, Novice, Intermediate, Advanced, Expert, and Master) in which datasets of progressive difficulty are introduced (5 per round). There is no requirement to participate in previous rounds to enter a new round. The rounds alternate AutoML phases, in which submitted code is “blind tested” on datasets the participants have never seen before, with Tweakathon phases giving the participants time (~1 month) to improve their methods by tweaking their code on those datasets. This challenge will push the state of the art in fully automatic machine learning on a wide range of problems taken from real-world applications. The platform will remain available beyond the termination of the challenge: http://codalab.org/AutoML | ||||
Address | Killarney; Ireland; July 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | IJCNN | ||
Notes | HuPBA;MILAB | Approved | no | ||
Call Number | Admin @ si @ GBC2015a | Serial | 2604 | ||