Records
Author | Naveen Onkarappa; Angel Sappa | ||||
Title | Synthetic sequences and ground-truth flow field generation for algorithm validation | Type | Journal Article | ||
Year | 2015 | Publication | Multimedia Tools and Applications | Abbreviated Journal | MTAP |
Volume | 74 | Issue | 9 | Pages | 3121-3135 |
Keywords | Ground-truth optical flow; Synthetic sequence; Algorithm validation | ||||
Abstract | Research in computer vision advances thanks to the availability of good datasets, which help to improve algorithms, validate results and enable comparative analysis. Datasets can be real or synthetic. For some computer vision problems, such as optical flow, it is not possible to obtain highly accurate ground-truth optical flow in natural outdoor real scenarios directly with any sensor, although it is possible to obtain ground-truth data of real scenarios in a laboratory setup with limited motion. In this difficult situation, computer graphics offers a viable option for creating realistic virtual scenarios. In the current work we present a framework to design virtual scenes and generate sequences as well as ground-truth flow fields. In particular, we generate a dataset containing sequences of driving scenarios. The sequences in the dataset vary in the speed of the on-board vision system, road texture, complexity of vehicle motion, and the presence of independently moving vehicles in the scene. This dataset enables the analysis and adaptation of existing optical flow methods, and encourages the invention of new approaches, particularly for driver assistance systems. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Springer US | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1380-7501 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ADAS; 600.055; 601.215; 600.076 | Approved | no | ||
Call Number | Admin @ si @ OnS2014b | Serial | 2472 | ||
Permanent link to this record | |||||
Author | Jean-Christophe Burie; J. Chazalon; M. Coustaty; S. Eskenazi; Muhammad Muzzamil Luqman; M. Mehri; Nibal Nayef; Jean-Marc Ogier; S. Prum; Marçal Rusiñol | ||||
Title | ICDAR2015 Competition on Smartphone Document Capture and OCR (SmartDoc) | Type | Conference Article | ||
Year | 2015 | Publication | 13th International Conference on Document Analysis and Recognition ICDAR2015 | Abbreviated Journal | |
Volume | Issue | Pages | 1161 - 1165 | ||
Keywords | |||||
Abstract | Smartphones are enabling new ways of capture, hence the need arises for seamless and reliable acquisition and digitization of documents in order to convert them to an editable, searchable and more human-readable format. Current state-of-the-art works lack databases and baseline benchmarks for digitizing mobile-captured documents. We have organized a competition for mobile document capture and OCR in order to address this issue. The competition is structured into two independent challenges: smartphone document capture, and smartphone OCR. This report describes the datasets for both challenges along with their ground truth, details the performance evaluation protocols which we used, and presents the final results of the participating methods. In total, we received 13 submissions: 8 for challenge I and 5 for challenge II. | ||||
Address | Nancy; France; August 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICDAR | ||
Notes | DAG; 600.077; 601.223; 600.084 | Approved | no | ||
Call Number | Admin @ si @ BCC2015 | Serial | 2681 | ||
Permanent link to this record | |||||
Author | Katerine Diaz; Francesc J. Ferri; W. Diaz | ||||
Title | Incremental Generalized Discriminative Common Vectors for Image Classification | Type | Journal Article | ||
Year | 2015 | Publication | IEEE Transactions on Neural Networks and Learning Systems | Abbreviated Journal | TNNLS |
Volume | 26 | Issue | 8 | Pages | 1761 - 1775 |
Keywords | |||||
Abstract | Subspace-based methods have become popular due to their ability to appropriately represent complex data in such a way that both dimensionality is reduced and discriminativeness is enhanced. Several recent works have concentrated on the discriminative common vector (DCV) method and other closely related algorithms also based on the concept of null space. In this paper, we present a generalized incremental formulation of the DCV methods, which allows the update of a given model by considering the addition of new examples, even from unseen classes. Having efficient incremental formulations of well-behaved batch algorithms allows us to conveniently adapt previously trained classifiers without the need to recompute them from scratch. The proposed generalized incremental method has been empirically validated in different case studies from different application domains (faces, objects, and handwritten digits), considering several different scenarios in which new data are continuously added at different rates starting from an initial model. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 2162-237X | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ADAS; 600.076 | Approved | no | ||
Call Number | Admin @ si @ DFD2015 | Serial | 2547 | ||
Permanent link to this record | |||||
Author | Hugo Jair Escalante; Jose Martinez; Sergio Escalera; Victor Ponce; Xavier Baro | ||||
Title | Improving Bag of Visual Words Representations with Genetic Programming | Type | Conference Article | ||
Year | 2015 | Publication | IEEE International Joint Conference on Neural Networks IJCNN2015 | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | The bag of visual words is a well-established representation in diverse computer vision problems. Taking inspiration from the fields of text mining and retrieval, this representation has proved to be very effective in a large number of domains. In most cases, a standard term-frequency weighting scheme is considered for representing images and videos in computer vision. This is somewhat surprising, as there are many alternative ways of generating bag of words representations within the text processing community. This paper explores the use of alternative weighting schemes for landmark tasks in computer vision: image categorization and gesture recognition. We study the suitability of using well-known supervised and unsupervised weighting schemes for such tasks. More importantly, we devise a genetic program that learns new ways of representing images and videos under the bag of visual words representation. The proposed method learns to combine term-weighting primitives, trying to maximize the classification performance. Experimental results are reported on standard image and video datasets, showing the effectiveness of the proposed evolutionary algorithm. | ||||
Address | Killarney; Ireland; July 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | IJCNN | ||
Notes | HuPBA;MV | Approved | no | ||
Call Number | Admin @ si @ EME2015 | Serial | 2603 | ||
Permanent link to this record | |||||
Author | David Aldavert; Marçal Rusiñol; Ricardo Toledo; Josep Llados | ||||
Title | A Study of Bag-of-Visual-Words Representations for Handwritten Keyword Spotting | Type | Journal Article | ||
Year | 2015 | Publication | International Journal on Document Analysis and Recognition | Abbreviated Journal | IJDAR |
Volume | 18 | Issue | 3 | Pages | 223-234 |
Keywords | Bag-of-Visual-Words; Keyword spotting; Handwritten documents; Performance evaluation | ||||
Abstract | The Bag-of-Visual-Words (BoVW) framework has gained popularity among the document image analysis community, specifically as a representation of handwritten words for recognition or spotting purposes. Although in the computer vision field the BoVW method has been greatly improved, most of the approaches in the document image analysis domain still rely on the basic implementation of the BoVW method, disregarding such latest refinements. In this paper, we present a review of those improvements and their application to the keyword spotting task. We thoroughly evaluate their impact against a baseline system on the well-known George Washington dataset and compare the obtained results against nine state-of-the-art keyword spotting methods. In addition, we also compare both the baseline and improved systems with the methods presented at the Handwritten Keyword Spotting Competition 2014. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Springer Berlin Heidelberg | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1433-2833 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | DAG; ADAS; 600.055; 600.061; 601.223; 600.077; 600.097 | Approved | no | ||
Call Number | Admin @ si @ ART2015 | Serial | 2679 | ||
Permanent link to this record | |||||
Author | Gerard Canal; Cecilio Angulo; Sergio Escalera | ||||
Title | Gesture based Human Multi-Robot interaction | Type | Conference Article | ||
Year | 2015 | Publication | IEEE International Joint Conference on Neural Networks IJCNN2015 | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | The emergence of robot applications for non-technical users implies designing new ways of interaction between robotic platforms and users. The main goal of this work is the development of a gestural interface to interact with robots in a similar way as humans do, allowing the user to provide information about the task through non-verbal communication. The gesture recognition application has been implemented using the Microsoft Kinect v2 sensor. A real-time algorithm based on skeletal features is described to deal with both static and dynamic gestures, the latter being recognized using a weighted Dynamic Time Warping method. The gesture recognition application has been implemented in a multi-robot case. A NAO humanoid robot is in charge of interacting with the users and responding to the visual signals they produce. Moreover, a wheeled Wifibot robot carries both the sensor and the NAO robot, easing navigation when necessary. A broad set of user tests has been carried out, demonstrating that the system is indeed a natural approach to human-robot interaction, with fast response, ease of use and high gesture recognition rates. | ||||
Address | Killarney; Ireland; July 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | IJCNN | ||
Notes | HuPBA;MILAB | Approved | no | ||
Call Number | CAE2015a | Serial | 2651 | ||
Permanent link to this record | |||||
Author | Eduardo Tusa; Arash Akbarinia; Raquel Gil Rodriguez; Corina Barbalata | ||||
Title | Real-Time Face Detection and Tracking Utilising OpenMP and ROS | Type | Conference Article | ||
Year | 2015 | Publication | 3rd Asia-Pacific Conference on Computer Aided System Engineering | Abbreviated Journal | |
Volume | Issue | Pages | 179 - 184 | ||
Keywords | RGB-D; Kinect; Human Detection and Tracking; ROS; OpenMP | ||||
Abstract | The first requisite for a robot to succeed in social interactions is accurate human localisation, i.e. subject detection and tracking. Later, it is estimated whether an interaction partner seeks attention, for example by interpreting the position and orientation of the body. In computer vision, these cues are usually obtained from colour images, whose quality is degraded in ill-illuminated social scenes. In these scenarios depth sensors offer a richer representation; therefore, it is important to combine colour and depth information. The second aspect that plays a fundamental role in the acceptance of social robots is their real-time ability. Processing colour and depth images is computationally demanding. To overcome this we propose a parallelisation strategy for face detection and tracking based on two different architectures: message passing and shared memory. Our results demonstrate high accuracy in low computational time, processing nine times more frames in the parallel implementation. This provides real-time social robot interaction. | ||||
Address | Quito; Ecuador; July 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | APCASE | ||
Notes | NEUROBIT | Approved | no | ||
Call Number | Admin @ si @ TAG2015 | Serial | 2659 | ||
Permanent link to this record | |||||
Author | Olivier Lefebvre; Pau Riba; Charles Fournier; Alicia Fornes; Josep Llados; Rejean Plamondon; Jules Gagnon-Marchand | ||||
Title | Monitoring neuromotricity on-line: a cloud computing approach | Type | Conference Article | ||
Year | 2015 | Publication | 17th Conference of the International Graphonomics Society IGS2015 | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | The goal of our experiment is to develop an accessible tool for evaluating a patient's health by analyzing handwritten strokes. We use a cloud computing approach to analyze stroke data sampled on a commercial tablet running the Android platform and a distant server to perform complex calculations using the Delta and Sigma lognormal algorithms. A Google Drive account is used to store the data and to ease the development of the project. The communication between the tablet, the cloud and the server is encrypted to ensure the confidentiality of biomedical information. Highly parameterized biomedical tests are implemented on the tablet, as well as a free drawing test to evaluate the validity of the data acquired by the first test compared to the second one. A blurred shape model descriptor pattern recognition algorithm is used to classify the data obtained by the free drawing test. The functions presented in this paper are still under development, and other improvements are needed before launching the application in the public domain. | ||||
Address | Pointe-à-Pitre; Guadeloupe; June 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | IGS | ||
Notes | DAG; 600.077 | Approved | no | ||
Call Number | Admin @ si @ LRF2015 | Serial | 2617 | ||
Permanent link to this record | |||||
Author | J.Poujol; Cristhian A. Aguilera-Carrasco; E.Danos; Boris X. Vintimilla; Ricardo Toledo; Angel Sappa | ||||
Title | Visible-Thermal Fusion based Monocular Visual Odometry | Type | Conference Article | ||
Year | 2015 | Publication | 2nd Iberian Robotics Conference ROBOT2015 | Abbreviated Journal | |
Volume | 417 | Issue | Pages | 517-528 | |
Keywords | Monocular Visual Odometry; LWIR-RGB cross-spectral Imaging; Image Fusion. | ||||
Abstract | The manuscript evaluates the performance of a monocular visual odometry approach when images from different spectra are considered, both independently and fused. The objective behind this evaluation is to analyze whether classical approaches can be improved when the given images, which are from different spectra, are fused and represented in new domains. The images in these new domains should have some of the following properties: i) more robust to noisy data; ii) less sensitive to changes (e.g., lighting); iii) richer in descriptive information, among others. In particular, in the current work two different image fusion strategies are considered. Firstly, images from the visible and thermal spectra are fused using a Discrete Wavelet Transform (DWT) approach. Secondly, a monochrome threshold strategy is considered. The obtained representations are evaluated under a visual odometry framework, highlighting their advantages and disadvantages, using different urban and semi-urban scenarios. Comparisons with both the monocular visible spectrum and the monocular infrared spectrum are also provided, showing the validity of the proposed approach. | ||||
Address | Lisboa; Portugal; November 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Springer International Publishing | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 2194-5357 | ISBN | 978-3-319-27145-3 | Medium | |
Area | Expedition | Conference | ROBOT | ||
Notes | ADAS; 600.076; 600.086 | Approved | no | ||
Call Number | Admin @ si @ PAD2015 | Serial | 2663 | ||
Permanent link to this record | |||||
Author | Wenjuan Gong; Y.Huang; Jordi Gonzalez; Liang Wang | ||||
Title | An Effective Solution to Double Counting Problem in Human Pose Estimation | Type | Miscellaneous | ||
Year | 2015 | Publication | Arxiv | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | Pose estimation; double counting problem; mixture of parts model | ||||
Abstract | The mixture of parts model has been successfully applied to solve the 2D human pose estimation problem, either as an explicitly trained body part model or as latent variables for pedestrian detection. Even in the era of massive applications of deep learning techniques, the mixture of parts model is still effective in solving certain problems, especially in the case of limited numbers of training samples. In this paper, we consider using the mixture of parts model for pose estimation, wherein a tree structure is utilized for representing relations between connected body parts. This strategy facilitates training and inference of the model but suffers from the double counting problem, where one detected body part is counted twice due to lack of constraints among unconnected body parts. To solve this problem, we propose a generalized solution in which various part attributes are captured by multiple features so as to avoid the double counting problem. Qualitative and quantitative experimental results on a publicly available dataset demonstrate the effectiveness of our proposed method. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ISE; 600.078 | Approved | no | ||
Call Number | Admin @ si @ GHG2015 | Serial | 2590 | ||
Permanent link to this record | |||||
Author | Alejandro Gonzalez Alzate; Gabriel Villalonga; German Ros; David Vazquez; Antonio Lopez | ||||
Title | 3D-Guided Multiscale Sliding Window for Pedestrian Detection | Type | Conference Article | ||
Year | 2015 | Publication | Pattern Recognition and Image Analysis, Proceedings of 7th Iberian Conference , ibPRIA 2015 | Abbreviated Journal | |
Volume | 9117 | Issue | Pages | 560-568 | |
Keywords | Pedestrian Detection | ||||
Abstract | The most relevant modules of a pedestrian detector are candidate generation and candidate classification. The former aims at presenting image windows to the latter so that they are classified as containing a pedestrian or not. Much attention has been paid to the classification module, while candidate generation has mainly relied on a (multiscale) sliding-window pyramid. However, candidate generation is critical for achieving real-time performance. In this paper we assume a context of autonomous driving based on stereo vision. Accordingly, we evaluate the effect of taking into account the 3D information (derived from the stereo) in order to prune the hundreds of thousands of windows per image generated by the classical pyramidal sliding window. For our study we use a multimodal (RGB, disparity) and multi-descriptor (HOG, LBP, HOG+LBP) holistic ensemble based on a linear SVM. Evaluation on data from the challenging KITTI benchmark suite shows the effectiveness of using 3D information to dramatically reduce the number of candidate windows, even improving the overall pedestrian detection accuracy. | ||||
Address | Santiago de Compostela; España; June 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | ACDC | Expedition | Conference | IbPRIA | |
Notes | ADAS; 600.076; 600.057; 600.054 | Approved | no | ||
Call Number | ADAS @ adas @ GVR2015 | Serial | 2585 | ||
Permanent link to this record | |||||
Author | Alvaro Cepero; Albert Clapes; Sergio Escalera | ||||
Title | Automatic non-verbal communication skills analysis: a quantitative evaluation | Type | Journal Article | ||
Year | 2015 | Publication | AI Communications | Abbreviated Journal | AIC |
Volume | 28 | Issue | 1 | Pages | 87-101 |
Keywords | Social signal processing; human behavior analysis; multi-modal data description; multi-modal data fusion; non-verbal communication analysis; e-Learning | ||||
Abstract | Oral communication competence ranks among the most relevant skills for one's professional and personal life. Because of the importance of communication in our activities of daily living, it is crucial to study methods to evaluate it and provide the necessary feedback that can be used to improve these communication capabilities and, therefore, learn how to express ourselves better. In this work, we propose a system capable of quantitatively evaluating the quality of oral presentations in an automatic fashion. The system is based on a multi-modal RGB, depth, and audio data description and a fusion approach in order to recognize behavioral cues and train classifiers able to eventually predict communication quality levels. The performance of the proposed system is tested on a novel dataset containing real Bachelor thesis defenses, presentations from 8th-semester Bachelor courses, and Master courses' presentations at Universitat de Barcelona. Using as ground truth the marks assigned by actual instructors, our system achieves high performance in categorizing and ranking presentations by their quality, and also in making real-valued mark predictions. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0921-7126 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | HUPBA;MILAB | Approved | no | ||
Call Number | Admin @ si @ CCE2015 | Serial | 2549 | ||
Permanent link to this record | |||||
Author | Mohammad Ali Bagheri; Qigang Gao; Sergio Escalera; Albert Clapes; Kamal Nasrollahi; Michael Holte; Thomas B. Moeslund | ||||
Title | Keep it Accurate and Diverse: Enhancing Action Recognition Performance by Ensemble Learning | Type | Conference Article | ||
Year | 2015 | Publication | IEEE Conference on Computer Vision and Pattern Recognition Worshops (CVPRW) | Abbreviated Journal | |
Volume | Issue | Pages | 22-29 | ||
Keywords | |||||
Abstract | The performance of different action recognition techniques has recently been studied by several computer vision researchers. However, the potential improvement in classification through classifier fusion by ensemble-based methods has remained unaddressed. In this work, we evaluate the performance of an ensemble of action learning techniques, each performing the recognition task from a different perspective. The underlying idea is that instead of aiming at a very sophisticated and powerful representation/learning technique, we can learn action categories using a set of relatively simple and diverse classifiers, each trained with a different feature set. In addition, combining the outputs of several learners can reduce the risk of an unfortunate selection of a learner for an unseen action recognition scenario. This leads to a more robust and generally applicable framework. In order to improve the recognition performance, a powerful combination strategy is utilized based on the Dempster-Shafer theory, which can effectively make use of the diversity of base learners trained on different sources of information. The recognition results of the individual classifiers are compared with those obtained from fusing the classifiers' outputs, showing the enhanced performance of the proposed methodology. | ||||
Address | Boston; EEUU; June 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPRW | ||
Notes | HuPBA;MILAB | Approved | no | ||
Call Number | Admin @ si @ BGE2015 | Serial | 2655 | ||
Permanent link to this record | |||||
Author | Mikhail Mozerov; Joost Van de Weijer | ||||
Title | Global Color Sparseness and a Local Statistics Prior for Fast Bilateral Filtering | Type | Journal Article | ||
Year | 2015 | Publication | IEEE Transactions on Image Processing | Abbreviated Journal | TIP |
Volume | 24 | Issue | 12 | Pages | 5842-5853 |
Keywords | |||||
Abstract | The property of smoothing while preserving edges makes the bilateral filter a very popular image processing tool. However, its non-linear nature results in a computationally costly operation. Various works propose fast approximations to the bilateral filter; however, the majority do not generalize to vector input, as is the case with color images. We propose a fast approximation to the bilateral filter for color images. The filter is based on two ideas. First, the number of colors that occur in a single natural image is limited. We exploit this color sparseness to rewrite the initial non-linear bilateral filter as a number of linear filter operations. Second, we impose a statistical prior on the image values that are locally present within the filter window. We show that this statistical prior leads to a closed-form solution of the bilateral filter. Finally, we combine both ideas into a single fast and accurate bilateral filter for color images. Experimental results show that our bilateral filter based on the local prior yields an extremely fast bilateral filter approximation, but with limited accuracy, which has potential application in real-time video filtering. Our bilateral filter, which combines color sparseness and local statistics, yields a fast and accurate bilateral filter approximation and obtains state-of-the-art results. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1057-7149 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | LAMP; 600.079;ISE | Approved | no | ||
Call Number | Admin @ si @ MoW2015b | Serial | 2689 | ||
Permanent link to this record | |||||
Author | Antonio Hernandez | ||||
Title | From pixels to gestures: learning visual representations for human analysis in color and depth data sequences | Type | Book Whole | ||
Year | 2015 | Publication | PhD Thesis, Universitat de Barcelona-CVC | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | The visual analysis of humans from images is an important topic of interest due to its relevance to many computer vision applications like pedestrian detection, monitoring and surveillance, human-computer interaction, e-health or content-based image retrieval, among others. In this dissertation we are interested in learning different visual representations of the human body that are helpful for the visual analysis of humans in images and video sequences. To that end, we analyze both RGB and depth image modalities and address the problem from three different research lines, at different levels of abstraction, from pixels to gestures: human segmentation, human pose estimation and gesture recognition. First, we show how binary segmentation (object vs. background) of the human body in image sequences is helpful to remove all the background clutter present in the scene. The presented method, based on Graph cuts optimization, enforces spatio-temporal consistency of the produced segmentation masks across consecutive frames. Second, we present a framework for multi-label segmentation for obtaining much more detailed segmentation masks: instead of just obtaining a binary representation separating the human body from the background, finer segmentation masks can be obtained separating the different body parts. At a higher level of abstraction, we aim for a simpler yet descriptive representation of the human body. Human pose estimation methods usually rely on skeletal models of the human body, formed by segments (or rectangles) that represent the body limbs, appropriately connected following the kinematic constraints of the human body. In practice, such skeletal models must fulfill some constraints in order to allow for efficient inference, while actually limiting the expressiveness of the model. In order to cope with this, we introduce a top-down approach for predicting the position of the body parts in the model, using a mid-level part representation based on Poselets. Finally, we propose a framework for gesture recognition based on the bag of visual words framework. We leverage the benefits of RGB and depth image modalities by combining modality-specific visual vocabularies in a late fusion fashion. A new rotation-variant depth descriptor is presented, yielding better results than other state-of-the-art descriptors. Moreover, spatio-temporal pyramids are used to encode rough spatial and temporal structure. In addition, we present a probabilistic reformulation of Dynamic Time Warping for gesture segmentation in video sequences. A Gaussian-based probabilistic model of a gesture is learnt, implicitly encoding possible deformations in both spatial and time domains. | ||||
Address | January 2015 | ||||
Corporate Author | Thesis | Ph.D. thesis | |||
Publisher | Ediciones Graficas Rey | Place of Publication | Editor | Sergio Escalera;Stan Sclaroff | |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-84-940902-0-2 | Medium | ||
Area | Expedition | Conference | |||
Notes | HuPBA;MILAB | Approved | no | ||
Call Number | Admin @ si @ Her2015 | Serial | 2576 | ||
Permanent link to this record |