Records | |||||
---|---|---|---|---|---|
Author | Marcelo D. Pistarelli; Angel Sappa; Ricardo Toledo | ||||
Title | Multispectral Stereo Image Correspondence | Type | Conference Article | ||
Year | 2013 | Publication | 15th International Conference on Computer Analysis of Images and Patterns | Abbreviated Journal | |
Volume | 8048 | Issue | Pages | 217-224 | |
Keywords | |||||
Abstract | This paper presents a novel multispectral stereo image correspondence approach. It is evaluated using a stereo rig constructed with a visible spectrum camera and a long wave infrared spectrum camera. The novelty of the proposed approach lies in the use of the Hough space as a correspondence search domain. In this way it avoids searching for correspondences in the original multispectral image domains, where information is poorly correlated, and a common domain is used instead. The proposed approach is intended for outdoor urban scenarios, where images contain a large number of edges. These edges are used as distinctive features for matching in the Hough space. Experimental results are provided showing the validity of the proposed approach. | ||||
Address | York; UK; August 2013 | ||||
Corporate Author | Thesis | ||||
Publisher | Springer Berlin Heidelberg | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | 0302-9743 | ISBN | 978-3-642-40245-6 | Medium | |
Area | Expedition | Conference | CAIP | ||
Notes | ADAS; 600.055 | Approved | no | ||
Call Number | Admin @ si @ PST2013 | Serial | 2561 | ||
Permanent link to this record | |||||
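The record above matches multispectral edges in a common Hough-space domain. As a rough, hypothetical sketch of that general idea (not the authors' implementation; all names and parameters here are invented for illustration), each image's edge pixels can vote into a (θ, ρ) accumulator, and accumulators can then be compared directly:

```python
import numpy as np

def hough_accumulator(edge_points, shape, n_theta=180, n_rho=100):
    """Accumulate votes in (theta, rho) space for a set of edge pixels.

    edge_points: iterable of (row, col) coordinates of edge pixels.
    shape: (height, width) of the source image, used to bound rho.
    """
    h, w = shape
    diag = np.hypot(h, w)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho))
    for r, c in edge_points:
        # rho = x*cos(theta) + y*sin(theta), quantized into n_rho bins
        rho = c * np.cos(thetas) + r * np.sin(thetas)
        bins = np.round((rho + diag) / (2 * diag) * (n_rho - 1)).astype(int)
        acc[np.arange(n_theta), bins] += 1
    return acc

def hough_similarity(acc_a, acc_b):
    """Normalized correlation between two accumulators (the common domain)."""
    a = acc_a.ravel() / (np.linalg.norm(acc_a) + 1e-12)
    b = acc_b.ravel() / (np.linalg.norm(acc_b) + 1e-12)
    return float(a @ b)
```

Identical edge geometry maps to identical accumulators regardless of the imaging modality that produced the edges, which is the property a common search domain relies on.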
Author | Gioacchino Vino; Angel Sappa | ||||
Title | Revisiting Harris Corner Detector Algorithm: a Gradual Thresholding Approach | Type | Conference Article | ||
Year | 2013 | Publication | 10th International Conference on Image Analysis and Recognition | Abbreviated Journal | |
Volume | 7950 | Issue | Pages | 354-363 | |
Keywords | |||||
Abstract | This paper presents an adaptive thresholding approach intended to increase the number of detected corners while reducing the number of those corresponding to noisy data. The proposed approach builds on the classical Harris corner detector and overcomes the difficulty of finding a general threshold that works well for all the images in a given data set by proposing a novel adaptive thresholding scheme. Initially, two thresholds are used to discern between strong corners and flat regions. Then, a region-based criterion is used to discriminate between weak corners and noisy points in the midway interval. Experimental results show that the proposed approach has a better capability to reject false corners and, at the same time, to detect weak ones. Comparisons with the state of the art are provided showing the validity of the proposed approach. | ||||
Address | Póvoa de Varzim; Portugal; June 2013 | ||||
Corporate Author | Thesis | ||||
Publisher | Springer Berlin Heidelberg | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | 0302-9743 | ISBN | 978-3-642-39093-7 | Medium | |
Area | Expedition | Conference | ICIAR | ||
Notes | ADAS; 600.055 | Approved | no | ||
Call Number | Admin @ si @ ViS2013 | Serial | 2562 | ||
Permanent link to this record | |||||
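The two-threshold scheme in the record above can be sketched as follows. This is only an illustrative guess at the general mechanism: the specific region-based criterion below (neighbourhood support in a 3x3 window) is an invented stand-in, not the criterion from the paper.

```python
import numpy as np

def gradual_threshold(response, t_low, t_high, support=1):
    """Classify Harris-like corner responses with two thresholds.

    Scores above t_high are accepted as strong corners; scores below
    t_low are rejected as flat regions. Scores in the midway interval
    are kept only if at least `support` neighbours in a 3x3 window are
    also above t_low (a stand-in for a region-based criterion).
    """
    strong = response >= t_high
    candidate = (response >= t_low) & ~strong
    # Count above-t_low neighbours for every pixel (3x3, excluding centre).
    above = (response >= t_low).astype(int)
    padded = np.pad(above, 1)
    neigh = sum(padded[1 + dr:1 + dr + above.shape[0],
                       1 + dc:1 + dc + above.shape[1]]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0))
    weak_kept = candidate & (neigh >= support)
    return strong | weak_kept
```

Isolated midway responses are treated as noise, while midway responses that support each other survive, which is the behaviour the abstract describes.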
Author | Alejandro Gonzalez Alzate; Gabriel Villalonga; Jiaolong Xu; David Vazquez; Jaume Amores; Antonio Lopez | ||||
Title | Multiview Random Forest of Local Experts Combining RGB and LIDAR data for Pedestrian Detection | Type | Conference Article | ||
Year | 2015 | Publication | IEEE Intelligent Vehicles Symposium IV2015 | Abbreviated Journal | |
Volume | Issue | Pages | 356-361 | ||
Keywords | Pedestrian Detection | ||||
Abstract | Despite recent significant advances, pedestrian detection continues to be an extremely challenging problem in real scenarios. In order to develop a detector that successfully operates under these conditions, it becomes critical to leverage multiple cues, multiple imaging modalities and a strong multi-view classifier that accounts for different pedestrian views and poses. In this paper we provide an extensive evaluation that gives insight into how each of these aspects (multi-cue, multimodality and a strong multi-view classifier) affects performance, both individually and when integrated together. In the multimodality component we explore the fusion of RGB and depth maps obtained by high-definition LIDAR, a modality that has only recently started to receive attention. As our analysis reveals, although all the aforementioned aspects significantly help in improving performance, the fusion of visible spectrum and depth information boosts accuracy by a much larger margin. The resulting detector not only ranks among the top performers in the challenging KITTI benchmark, but is built upon very simple blocks that are easy to implement and computationally efficient. These simple blocks can easily be replaced with more sophisticated ones recently proposed, such as convolutional neural networks for feature representation, to further improve accuracy. | ||||
Address | Seoul; Korea; June 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | ACDC | Expedition | Conference | IV | |
Notes | ADAS; 600.076; 600.057; 600.054 | Approved | no | ||
Call Number | ADAS @ adas @ GVX2015 | Serial | 2625 | ||
Permanent link to this record | |||||
Author | P. Wang; V. Eglin; C. Garcia; C. Largeron; Josep Llados; Alicia Fornes | ||||
Title | Représentation par graphe de mots manuscrits dans les images pour la recherche par similarité | Type | Conference Article | ||
Year | 2014 | Publication | Colloque International Francophone sur l'Écrit et le Document | Abbreviated Journal | |
Volume | Issue | Pages | 233-248 | ||
Keywords | word spotting; graph-based representation; shape context description; graph edit distance; DTW; block merging; query by example | ||||
Abstract | Effective information retrieval on handwritten document images has always been a challenging task. In this paper, we propose a novel handwritten word spotting approach based on graph representation. The presented model comprises both topological and morphological signatures of the handwriting. Skeleton-based graphs with Shape Context labeled vertices are built for connected components. Each word image is represented as a sequence of graphs. In order to be robust to handwriting variations, an exhaustive merging process based on DTW alignment results is introduced into the similarity measure between word images. With respect to computational complexity, an approximate graph edit distance approach using bipartite matching is employed for graph matching. The experiments on the George Washington dataset and the marriage records from the Barcelona Cathedral dataset demonstrate that the proposed approach outperforms state-of-the-art structural methods. | ||||
Address | Nancy; France; March 2014 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CIFED | ||
Notes | DAG; 600.061; 602.006; 600.077 | Approved | no | ||
Call Number | Admin @ si @ WEG2014c | Serial | 2564 | ||
Permanent link to this record | |||||
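The word-spotting record above aligns sequences of graphs with DTW, using an approximate graph edit distance as the per-element cost. The DTW part of that pipeline can be sketched generically (the `cost` callable is a placeholder for the graph edit distance; this is not the authors' code):

```python
import numpy as np

def dtw_distance(seq_a, seq_b, cost):
    """Dynamic time warping distance between two sequences.

    cost(a, b) plays the role of the (approximate) graph edit distance
    between two component graphs; any pairwise cost function works.
    """
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = cost(seq_a[i - 1], seq_b[j - 1])
            # Extend the cheapest of the three admissible alignments.
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

DTW tolerates one word having more connected components than the other, which is why the abstract pairs it with a merging process to absorb handwriting variation.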
Author | Michal Drozdzal; Jordi Vitria; Santiago Segui; Carolina Malagelada; Fernando Azpiroz; Petia Radeva | ||||
Title | Intestinal event segmentation for endoluminal video analysis | Type | Conference Article | ||
Year | 2014 | Publication | 21st IEEE International Conference on Image Processing | Abbreviated Journal | |
Volume | Issue | Pages | 3592 - 3596 | ||
Keywords | |||||
Abstract | |||||
Address | Paris; France; October 2014 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICIP | ||
Notes | MILAB; OR;MV | Approved | no | ||
Call Number | Admin @ si @ DVS2014 | Serial | 2565 | ||
Permanent link to this record | |||||
Author | Jiaolong Xu; Sebastian Ramos; David Vazquez; Antonio Lopez | ||||
Title | DA-DPM Pedestrian Detection | Type | Conference Article | ||
Year | 2013 | Publication | ICCV Workshop on Reconstruction meets Recognition | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | Domain Adaptation; Pedestrian Detection | ||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICCVW-RR | ||
Notes | ADAS | Approved | no | ||
Call Number | Admin @ si @ XRV2013 | Serial | 2569 | ||
Permanent link to this record | |||||
Author | Alejandro Gonzalez Alzate; Gabriel Villalonga; German Ros; David Vazquez; Antonio Lopez | ||||
Title | 3D-Guided Multiscale Sliding Window for Pedestrian Detection | Type | Conference Article | ||
Year | 2015 | Publication | Pattern Recognition and Image Analysis, Proceedings of 7th Iberian Conference , ibPRIA 2015 | Abbreviated Journal | |
Volume | 9117 | Issue | Pages | 560-568 | |
Keywords | Pedestrian Detection | ||||
Abstract | The most relevant modules of a pedestrian detector are candidate generation and candidate classification. The former presents image windows to the latter so that they can be classified as containing a pedestrian or not. Much attention has been paid to the classification module, while candidate generation has mainly relied on the (multiscale) sliding window pyramid. However, candidate generation is critical for achieving real-time performance. In this paper we assume a context of autonomous driving based on stereo vision. Accordingly, we evaluate the effect of taking into account the 3D information (derived from the stereo pair) in order to prune the hundreds of thousands of windows per image generated by the classical pyramidal sliding window. For our study we use a multimodal (RGB, disparity) and multi-descriptor (HOG, LBP, HOG+LBP) holistic ensemble based on linear SVM. Evaluation on data from the challenging KITTI benchmark suite shows the effectiveness of using 3D information to dramatically reduce the number of candidate windows, while even improving the overall pedestrian detection accuracy. | ||||
Address | Santiago de Compostela; Spain; June 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | ACDC | Expedition | Conference | IbPRIA | |
Notes | ADAS; 600.076; 600.057; 600.054 | Approved | no | ||
Call Number | ADAS @ adas @ GVR2015 | Serial | 2585 | ||
Permanent link to this record | |||||
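The 3D-guided pruning idea in the record above can be illustrated with basic stereo geometry: a window's disparity gives its depth, and depth plus pixel height gives a metric height that must be plausible for a pedestrian. This is a minimal sketch under invented names and thresholds, not the paper's actual candidate generator:

```python
def plausible_windows(windows, disparity_of, focal, baseline,
                      min_h=1.4, max_h=2.0):
    """Keep only windows whose size is consistent with a pedestrian.

    windows: list of (x, y, w_px, h_px) candidate windows.
    disparity_of: callable returning a representative disparity (pixels)
    for a window, e.g. the median disparity inside it.
    focal, baseline: stereo calibration; depth = focal * baseline / d.
    A window of pixel height h_px at depth Z spans h_px * Z / focal
    metres; keep it only if that lies in [min_h, max_h] metres.
    """
    kept = []
    for win in windows:
        d = disparity_of(win)
        if d <= 0:  # no valid stereo measurement for this window
            continue
        depth = focal * baseline / d
        metric_height = win[3] * depth / focal
        if min_h <= metric_height <= max_h:
            kept.append(win)
    return kept
```

Windows whose implied metric height is far from human size are discarded before classification, which is how most of the pyramid's candidates can be pruned.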
Author | Joost Van de Weijer; Fahad Shahbaz Khan | ||||
Title | An Overview of Color Name Applications in Computer Vision | Type | Conference Article | ||
Year | 2015 | Publication | Computational Color Imaging Workshop | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | color features; color names; object recognition | ||||
Abstract | In this article we provide an overview of color name applications in computer vision. Color names are linguistic labels which humans use to communicate color. Computational color naming learns a mapping from pixel values to color names. In recent years color names have been applied to a wide variety of computer vision applications, including image classification, object recognition, texture classification, visual tracking and action recognition. Here we provide an overview of these results, which show that in general color names outperform photometric invariants as a color representation. | ||||
Address | Saint Etienne; France; March 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CCIW | ||
Notes | LAMP; 600.079; 600.068;CIC | Approved | no | ||
Call Number | Admin @ si @ WeK2015 | Serial | 2586 | ||
Permanent link to this record | |||||
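The color-naming record above learns a mapping from pixel values to the basic color terms. A toy nearest-prototype version of such a mapping looks like this; the RGB anchors below are illustrative guesses only (in the literature the mapping is learned from labeled data, not hand-set):

```python
import numpy as np

# Rough RGB prototypes for the 11 basic English colour terms.
# These anchors are invented for illustration, not learned values.
COLOR_NAMES = {
    "black": (0, 0, 0), "blue": (0, 0, 255), "brown": (139, 69, 19),
    "grey": (128, 128, 128), "green": (0, 128, 0), "orange": (255, 165, 0),
    "pink": (255, 192, 203), "purple": (128, 0, 128), "red": (255, 0, 0),
    "white": (255, 255, 255), "yellow": (255, 255, 0),
}

def color_name(rgb):
    """Map a pixel value to the nearest colour-name prototype."""
    px = np.asarray(rgb, dtype=float)
    names = list(COLOR_NAMES)
    protos = np.array([COLOR_NAMES[n] for n in names], dtype=float)
    return names[int(np.argmin(np.linalg.norm(protos - px, axis=1)))]
```

Histograms over these 11 names then serve as a compact color descriptor for classification or tracking, which is the usage pattern the overview surveys.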
Author | Sergio Escalera; Jordi Gonzalez; Xavier Baro; Pablo Pardo; Junior Fabian; Marc Oliu; Hugo Jair Escalante; Ivan Huerta; Isabelle Guyon | ||||
Title | ChaLearn Looking at People 2015 new competitions: Age Estimation and Cultural Event Recognition | Type | Conference Article | ||
Year | 2015 | Publication | IEEE International Joint Conference on Neural Networks IJCNN2015 | Abbreviated Journal | |
Volume | Issue | Pages | 1-8 | ||
Keywords | |||||
Abstract | Following the previous series of Looking at People (LAP) challenges [1], [2], [3], in 2015 ChaLearn runs two new competitions within the field of Looking at People: age estimation and cultural event recognition in still images. We propose the first crowdsourcing application to collect and label data about the apparent age of people instead of the real age. In terms of cultural event recognition, tens of categories have to be recognized. This involves scene understanding and human analysis. This paper summarizes both challenges and their data, providing some initial baselines. The results of the first round of the competition were presented at the ChaLearn LAP 2015 IJCNN special session on computer vision and robotics http://www.dtic.ua.es/~jgarcia/IJCNN2015. Details of the ChaLearn LAP competitions can be found at http://gesture.chalearn.org/. | ||||
Address | Killarney; Ireland; July 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | IJCNN | ||
Notes | HuPBA; ISE; 600.063; 600.078;MV;OR | Approved | no | ||
Call Number | Admin @ si @ EGB2015 | Serial | 2591 | ||
Permanent link to this record | |||||
Author | Adriana Romero; Nicolas Ballas; Samira Ebrahimi Kahou; Antoine Chassang; Carlo Gatta; Yoshua Bengio | ||||
Title | FitNets: Hints for Thin Deep Nets | Type | Conference Article | ||
Year | 2015 | Publication | 3rd International Conference on Learning Representations ICLR2015 | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | Computer Science ; Learning; Computer Science ;Neural and Evolutionary Computing | ||||
Abstract | While depth tends to improve network performance, it also makes gradient-based training more difficult, since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed at obtaining small and fast-to-execute models, and it has shown that a student network can imitate the soft output of a larger teacher network or ensemble of networks. In this paper, we extend this idea to allow the training of a student that is deeper and thinner than the teacher, using not only the outputs but also the intermediate representations learned by the teacher as hints to improve the training process and final performance of the student. Because the student intermediate hidden layer will generally be smaller than the teacher's intermediate hidden layer, additional parameters are introduced to map the student hidden layer to the prediction of the teacher hidden layer. This allows one to train deeper students that can generalize better or run faster, a trade-off that is controlled by the chosen student capacity. For example, on CIFAR-10, a deep student network with almost 10.4 times fewer parameters outperforms a larger, state-of-the-art teacher network. | ||||
Address | San Diego; CA; May 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICLR | ||
Notes | MILAB | Approved | no | ||
Call Number | Admin @ si @ RBK2015 | Serial | 2593 | ||
Permanent link to this record | |||||
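The hint mechanism the FitNets abstract describes, a learned regressor mapping the smaller student layer into the teacher's space and an L2 penalty between the two, can be sketched as a forward-pass loss (a simplified numpy illustration, not the authors' training code):

```python
import numpy as np

def hint_loss(student_hidden, teacher_hidden, W_r):
    """FitNets-style hint loss (forward pass only).

    student_hidden: (batch, d_s) activations of the student's guided layer.
    teacher_hidden: (batch, d_t) activations of the teacher's hint layer.
    W_r: (d_s, d_t) regressor mapping the (smaller) student layer into
    the teacher's space; it is trained jointly with the student.
    """
    projected = student_hidden @ W_r        # lift student to teacher dims
    diff = projected - teacher_hidden
    return 0.5 * float(np.mean(np.sum(diff ** 2, axis=1)))
```

During hint training this loss is minimized with respect to both the student's lower layers and W_r, before the usual distillation stage on the outputs.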
Author | Marc Bolaños; Maite Garolera; Petia Radeva | ||||
Title | Object Discovery using CNN Features in Egocentric Videos | Type | Conference Article | ||
Year | 2015 | Publication | Pattern Recognition and Image Analysis, Proceedings of 7th Iberian Conference , ibPRIA 2015 | Abbreviated Journal | |
Volume | 9117 | Issue | Pages | 67-74 | |
Keywords | Object discovery; Egocentric videos; Lifelogging; CNN | ||||
Abstract | Lifelogging devices based on photo/video are spreading faster every day. This growth can bring great benefits for developing methods to extract meaningful information about the user wearing the device and his/her environment. In this paper, we propose a semi-supervised strategy for easily discovering objects relevant to the person wearing a first-person camera. Given the egocentric video sequence acquired by the camera, our method uses both the appearance extracted by means of a deep convolutional neural network and an object-refill methodology that allows discovering objects even when they appear in only a small portion of the image collection. We validate our method on a sequence of 1000 egocentric daily images and obtain results with an F-measure of 0.5, 0.17 higher than the state-of-the-art approach. | ||||
Address | Santiago de Compostela; Spain; June 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | 0302-9743 | ISBN | 978-3-319-19389-2 | Medium | |
Area | Expedition | Conference | IbPRIA | ||
Notes | MILAB | Approved | no | ||
Call Number | Admin @ si @ BGR2015 | Serial | 2596 | ||
Permanent link to this record | |||||
Author | Estefania Talavera; Mariella Dimiccoli; Marc Bolaños; Maedeh Aghaei; Petia Radeva | ||||
Title | R-clustering for egocentric video segmentation | Type | Conference Article | ||
Year | 2015 | Publication | Pattern Recognition and Image Analysis, Proceedings of 7th Iberian Conference , ibPRIA 2015 | Abbreviated Journal | |
Volume | 9117 | Issue | Pages | 327-336 | |
Keywords | Temporal video segmentation; Egocentric videos; Clustering | ||||
Abstract | In this paper, we present a new method for egocentric video temporal segmentation based on integrating a statistical mean-change detector and agglomerative clustering (AC) within an energy-minimization framework. Given the tendency of most AC methods to oversegment video sequences when clustering their frames, we combine the clustering with a concept drift detection technique (ADWIN) that has rigorous performance guarantees. ADWIN serves as a statistical upper bound for the clustering-based video segmentation. We integrate both techniques in an energy-minimization framework that serves to disambiguate the decisions of both techniques and to complete the segmentation taking into account the temporal continuity of the video frame descriptors. We present experiments over egocentric sets of more than 13,000 images acquired with different wearable cameras, showing that our method outperforms state-of-the-art clustering methods. | ||||
Address | Santiago de Compostela; Spain; June 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Springer International Publishing | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | 0302-9743 | ISBN | 978-3-319-19389-2 | Medium | |
Area | Expedition | Conference | IbPRIA | ||
Notes | MILAB | Approved | no | ||
Call Number | Admin @ si @ TDB2015 | Serial | 2597 | ||
Permanent link to this record | |||||
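The statistical mean-change test that underpins ADWIN in the record above can be illustrated with a simplified batch version: declare a segment boundary wherever the means of the two sub-windows differ by more than a Hoeffding-style bound. The real ADWIN maintains an adaptive window online; this sketch only shows the test itself:

```python
import numpy as np

def mean_change_cuts(values, delta=0.05):
    """Simplified ADWIN-style segmentation of a 1-D signal.

    Scans every split point of the sequence and declares a boundary where
    the two sub-window means differ by more than a Hoeffding-style bound
    with confidence parameter delta.
    """
    x = np.asarray(values, dtype=float)
    n = len(x)
    cuts = []
    for k in range(1, n):
        n0, n1 = k, n - k
        m = 1.0 / (1.0 / n0 + 1.0 / n1)          # harmonic mean of sizes
        eps = np.sqrt(np.log(4.0 / delta) / (2.0 * m))
        if abs(x[:k].mean() - x[k:].mean()) > eps:
            cuts.append(k)
    return cuts
```

Applied to a per-frame descriptor statistic, such a test fires only on statistically significant changes, which is why it can serve as an upper bound on how finely the agglomerative clustering is allowed to segment.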
Author | Bogdan Raducanu; Alireza Bosaghzadeh; Fadi Dornaika | ||||
Title | Facial Expression Recognition based on Multi-view Observations with Application to Social Robotics | Type | Conference Article | ||
Year | 2014 | Publication | 1st Workshop on Computer Vision for Affective Computing | Abbreviated Journal | |
Volume | Issue | Pages | 1-8 | ||
Keywords | |||||
Abstract | Human-robot interaction is a hot topic nowadays in the social robotics community. One crucial aspect is affective communication, which is encoded through facial expressions. In this paper, we propose a novel approach for facial expression recognition which exploits an efficient and adaptive graph-based label propagation (in semi-supervised mode) in a multi-observation framework. The facial features are extracted using an appearance-based, view- and texture-independent 3D face tracker. Our method has been extensively tested on the CMU dataset and has been conveniently compared with other methods for graph construction. With the proposed approach, we developed an application for an AIBO robot, in which it mirrors the recognized facial expression. | ||||
Address | Singapore; November 2014 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ACCV | ||
Notes | LAMP;;MV | Approved | no | ||
Call Number | Admin @ si @ RBD2014 | Serial | 2599 | ||
Permanent link to this record | |||||
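Graph-based label propagation, the semi-supervised core of the record above, can be sketched with the classic normalized-affinity iteration (Zhou et al.'s scheme; the paper's adaptive graph construction is not reproduced here, and this is an illustrative sketch rather than the authors' code):

```python
import numpy as np

def label_propagation(W, Y, alpha=0.9, n_iter=100):
    """Graph-based semi-supervised label propagation.

    W: (n, n) symmetric non-negative affinity matrix over samples.
    Y: (n, c) one-hot labels for labelled nodes, zero rows for unlabelled.
    Iterates F <- alpha * S @ F + (1 - alpha) * Y with the normalized
    affinity S = D^-1/2 W D^-1/2, then returns the predicted class ids.
    """
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt
    F = Y.astype(float).copy()
    for _ in range(n_iter):
        F = alpha * S @ F + (1 - alpha) * Y
    return F.argmax(axis=1)
```

Labels diffuse along graph edges, so unlabelled expression samples inherit the class of the labelled samples they are most strongly connected to.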
Author | Firat Ismailoglu; Ida G. Sprinkhuizen-Kuyper; Evgueni Smirnov; Sergio Escalera; Ralf Peeters | ||||
Title | Fractional Programming Weighted Decoding for Error-Correcting Output Codes | Type | Conference Article | ||
Year | 2015 | Publication | Multiple Classifier Systems, Proceedings of 12th International Workshop , MCS 2015 | Abbreviated Journal | |
Volume | Issue | Pages | 38-50 | ||
Keywords | |||||
Abstract | In order to increase the classification performance obtained using Error-Correcting Output Codes (ECOC) designs, introducing weights in the decoding phase of the ECOC has attracted a lot of interest. In this work, we present a method for ECOC designs that focuses on increasing the hypothesis margin on the data samples given a base classifier. While achieving this, we implicitly reward the base classifiers with high performance and penalize those with low performance. The resulting objective function is of the fractional programming type, and we deal with this problem through Dinkelbach's algorithm. The conducted tests over well-known UCI datasets show that the presented method is superior to unweighted decoding and that it outperforms the results of the state-of-the-art weighted decoding methods in most of the performed experiments. | ||||
Address | Günzburg; Germany; June 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Springer International Publishing | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-3-319-20247-1 | Medium | ||
Area | Expedition | Conference | MCS | ||
Notes | HuPBA;MILAB | Approved | no | ||
Call Number | Admin @ si @ ISS2015 | Serial | 2601 | ||
Permanent link to this record | |||||
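Weighted ECOC decoding itself, the setting the record above operates in, is easy to state: each base classifier's signed output is scaled by a per-classifier weight before being matched against the class codewords. This sketch shows the decoding step only; the paper's contribution is learning the weights by fractional programming, which is not reproduced here:

```python
import numpy as np

def weighted_ecoc_decode(code_matrix, outputs, weights):
    """Weighted decoding for Error-Correcting Output Codes.

    code_matrix: (n_classes, n_dichotomies) with entries in {-1, +1}.
    outputs: (n_dichotomies,) signed outputs of the base classifiers.
    weights: (n_dichotomies,) per-classifier weights, e.g. reflecting
    validation performance; uniform weights recover unweighted decoding.
    Returns the index of the class whose codeword best agrees with the
    weighted outputs.
    """
    M = np.asarray(code_matrix, dtype=float)
    scores = M @ (np.asarray(weights, float) * np.asarray(outputs, float))
    return int(np.argmax(scores))
```

Down-weighting an unreliable dichotomizer reduces its influence on the decoded class, which is exactly the lever the weight-learning objective exploits.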
Author | Hugo Jair Escalante; Jose Martinez; Sergio Escalera; Victor Ponce; Xavier Baro | ||||
Title | Improving Bag of Visual Words Representations with Genetic Programming | Type | Conference Article | ||
Year | 2015 | Publication | IEEE International Joint Conference on Neural Networks IJCNN2015 | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | The bag of visual words is a well-established representation in diverse computer vision problems. Taking inspiration from the fields of text mining and retrieval, this representation has proved to be very effective in a large number of domains. In most cases, a standard term-frequency weighting scheme is used for representing images and videos in computer vision. This is somewhat surprising, as there are many alternative ways of generating bag-of-words representations within the text processing community. This paper explores the use of alternative weighting schemes for landmark tasks in computer vision: image categorization and gesture recognition. We study the suitability of well-known supervised and unsupervised weighting schemes for such tasks. More importantly, we devise a genetic program that learns new ways of representing images and videos under the bag-of-visual-words representation. The proposed method learns to combine term-weighting primitives trying to maximize the classification performance. Experimental results are reported on standard image and video datasets, showing the effectiveness of the proposed evolutionary algorithm. | ||||
Address | Killarney; Ireland; July 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | IJCNN | ||
Notes | HuPBA;MV;OR | Approved | no | ||
Call Number | Admin @ si @ EME2015 | Serial | 2603 | ||
Permanent link to this record |
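One of the alternative weighting schemes the record above contrasts with plain term frequency is tf-idf, carried over from text retrieval to visual-word histograms. A minimal sketch (one possible tf-idf variant, used here only to illustrate the family of term-weighting primitives the genetic program combines):

```python
import numpy as np

def tfidf_bovw(counts):
    """tf-idf weighting of bag-of-visual-words histograms.

    counts: (n_images, n_words) raw visual-word counts. Standard
    term-frequency weighting uses the normalized counts directly;
    tf-idf additionally discounts visual words that occur in many
    images, since they carry little discriminative information.
    """
    counts = np.asarray(counts, dtype=float)
    n_images = counts.shape[0]
    df = (counts > 0).sum(axis=0)                       # document frequency
    idf = np.log(n_images / np.maximum(df, 1)) + 1.0    # smoothed idf
    tf = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1e-12)
    return tf * idf
```

Rare visual words receive higher weight than ubiquitous ones, mirroring the behaviour of tf-idf on text; the genetic program searches over combinations of such primitives.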