Records | |||||
---|---|---|---|---|---|
Author | Ivan Huerta; Michael Holte; Thomas B. Moeslund; Jordi Gonzalez | ||||
Title | Chromatic shadow detection and tracking for moving foreground segmentation | Type | Journal Article | ||
Year | 2015 | Publication | Image and Vision Computing | Abbreviated Journal | IMAVIS |
Volume | 41 | Issue | Pages | 42-53 | |
Keywords | Detecting moving objects; Chromatic shadow detection; Temporal local gradient; Spatial and Temporal brightness and angle distortions; Shadow tracking | ||||
Abstract | Advanced segmentation techniques in the surveillance domain deal with shadows to avoid distortions when detecting moving objects. Most approaches for shadow detection are still typically restricted to penumbra shadows and cannot cope well with umbra shadows. Consequently, umbra shadow regions are usually detected as part of moving objects, thus affecting the performance of the final detection. In this paper we address the detection of both penumbra and umbra shadow regions. First, a novel bottom-up approach is presented based on gradient and colour models, which successfully discriminates between chromatic moving cast shadow regions and those regions detected as moving objects. In essence, those regions corresponding to potential shadows are detected based on edge partitioning and colour statistics. Subsequently (i) temporal similarities between textures and (ii) spatial similarities between chrominance angle and brightness distortions are analysed for each potential shadow region for detecting the umbra shadow regions. Our second contribution refines the segmentation results even further: a tracking-based top-down approach increases the performance of our bottom-up chromatic shadow detection algorithm by properly correcting non-detected shadows. To do so, a combination of motion filters in a data association framework exploits the temporal consistency between objects and shadows to increase the shadow detection rate. Experimental results exceed the current state-of-the-art in shadow accuracy for multiple well-known surveillance image databases which contain different shadowed materials and illumination conditions. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ISE; 600.078; 600.063 | Approved | no | ||
Call Number | Admin @ si @ HHM2015 | Serial | 2703 | ||
Permanent link to this record | |||||
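The chrominance-angle and brightness distortions that the abstract above analyses per potential shadow region can be illustrated with a minimal per-pixel sketch (the function name `distortions` and the unnormalised background model are assumptions; the paper's full method additionally uses gradient models and spatial/temporal statistics):

```python
import math

def distortions(pixel, bg_mean):
    """Brightness distortion and chrominance angle of an observed RGB
    pixel with respect to the background model mean (simplified sketch:
    no per-channel variance normalisation, unlike a full background model)."""
    dot = sum(p * b for p, b in zip(pixel, bg_mean))
    norm_p = math.sqrt(sum(p * p for p in pixel))
    norm_b = math.sqrt(sum(b * b for b in bg_mean))
    alpha = dot / (norm_b * norm_b)                  # brightness distortion
    cos_angle = min(1.0, dot / (norm_p * norm_b))    # clamp rounding noise
    angle = math.acos(cos_angle)                     # chrominance angle (radians)
    return alpha, angle
```

A pixel darker than the background (alpha below 1) whose colour direction barely changes (near-zero chrominance angle) is a chromatic-shadow candidate under this toy model.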
Author | Aura Hernandez-Sabate; Meritxell Joanpere; Nuria Gorgorio; Lluis Albarracin | ||||
Title | Mathematics learning opportunities when playing a Tower Defense Game | Type | Journal | ||
Year | 2015 | Publication | International Journal of Serious Games | Abbreviated Journal | IJSG |
Volume | 2 | Issue | 4 | Pages | 57-71 |
Keywords | Tower Defense game; learning opportunities; mathematics; problem solving; game design | ||||
Abstract | A qualitative research study is presented herein with the purpose of identifying mathematics learning opportunities in students between 10 and 12 years old while playing a commercial version of a Tower Defense game. These learning opportunities are understood as mathematicisable moments of the game and involve the establishment of relationships between the game and mathematical problem solving. Based on the analysis of these mathematicisable moments, we conclude that the game can promote problem-solving processes and learning opportunities that can be associated with different mathematical contents that appear in mathematics curricula, though it seems that teachers or new game elements might be needed to facilitate the processes. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ADAS; 600.076 | Approved | no | ||
Call Number | Admin @ si @ HJG2015 | Serial | 2730 | ||
Permanent link to this record | |||||
Author | Marta Nuñez-Garcia; Sonja Simpraga; M.Angeles Jurado; Maite Garolera; Roser Pueyo; Laura Igual | ||||
Title | FADR: Functional-Anatomical Discriminative Regions for rest fMRI Characterization | Type | Conference Article | ||
Year | 2015 | Publication | Machine Learning in Medical Imaging, Proceedings of 6th International Workshop, MLMI 2015, Held in Conjunction with MICCAI 2015 | Abbreviated Journal | |
Volume | Issue | Pages | 61-68 | ||
Keywords | |||||
Abstract | |||||
Address | Munich; Germany; October 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | MLMI | ||
Notes | MILAB | Approved | no | ||
Call Number | Admin @ si @ NSJ2015 | Serial | 2674 | ||
Permanent link to this record | |||||
Author | David Sanchez-Mendoza; David Masip; Agata Lapedriza | ||||
Title | Emotion recognition from mid-level features | Type | Journal Article | ||
Year | 2015 | Publication | Pattern Recognition Letters | Abbreviated Journal | PRL |
Volume | 67 | Issue | Part 1 | Pages | 66–74 |
Keywords | Facial expression; Emotion recognition; Action units; Computer vision | ||||
Abstract | In this paper we present a study on the use of Action Units as mid-level features for automatically recognizing basic and subtle emotions. We propose a representation model based on mid-level facial muscular movement features. We encode these movements dynamically using the Facial Action Coding System, and propose to use these intermediate features based on Action Units (AUs) to classify emotions. AU activations are detected by fusing a set of spatiotemporal geometric and appearance features. The algorithm is validated in two applications: (i) the recognition of 7 basic emotions using the publicly available Cohn-Kanade database, and (ii) the inference of subtle emotional cues in the Newscast database. In this second scenario, we consider emotions that are perceived cumulatively over longer periods of time. In particular, we automatically classify whether video shots from public news TV channels refer to Good or Bad news. To deal with the different video lengths we propose a Histogram of Action Units and compute it using a sliding window strategy on the frame sequences. Our approach achieves accuracies close to human perception. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Elsevier B.V. | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0167-8655 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | OR;MV | Approved | no | ||
Call Number | Admin @ si @ SML2015 | Serial | 2746 | ||
Permanent link to this record | |||||
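The Histogram of Action Units computed with a sliding window, as described in the abstract above, might look roughly like this (the function name, the representation of each frame as a set of active AU indices, and the per-window normalisation are assumptions, not the paper's exact formulation):

```python
def histogram_of_aus(au_activations, num_aus, window, step):
    """Per-window normalised histogram counting how often each Action
    Unit is active; a fixed window slid over the frame sequence copes
    with videos of different lengths (simplified sketch)."""
    histograms = []
    for start in range(0, max(1, len(au_activations) - window + 1), step):
        frames = au_activations[start:start + window]
        counts = [0] * num_aus
        for frame in frames:          # frame: set of active AU indices
            for au in frame:
                counts[au] += 1
        total = len(frames) or 1
        histograms.append([c / total for c in counts])
    return histograms
```

Each window yields one fixed-length descriptor regardless of the video length, which is what makes the representation usable by a standard classifier.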
Author | Marc Bolaños; Maite Garolera; Petia Radeva | ||||
Title | Object Discovery using CNN Features in Egocentric Videos | Type | Conference Article | ||
Year | 2015 | Publication | Pattern Recognition and Image Analysis, Proceedings of 7th Iberian Conference, IbPRIA 2015 | Abbreviated Journal |
Volume | 9117 | Issue | Pages | 67-74 | |
Keywords | Object discovery; Egocentric videos; Lifelogging; CNN | ||||
Abstract | Lifelogging devices based on photo/video are spreading faster every day. This growth can bring great benefits for developing methods that extract meaningful information about the user wearing the device and his/her environment. In this paper, we propose a semi-supervised strategy for easily discovering objects relevant to the person wearing a first-person camera. Given the egocentric video sequence acquired by the camera, the method uses both appearance, extracted by means of a deep convolutional neural network, and an object refill methodology that allows discovering objects even when they appear in only a small number of images in the collection. We validate our method on a sequence of 1000 egocentric daily images and obtain an F-measure of 0.5, 0.17 higher than the state-of-the-art approach. | ||||
Address | Santiago de Compostela; Spain; June 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | 0302-9743 | ISBN | 978-3-319-19389-2 | Medium | |
Area | Expedition | Conference | IbPRIA | ||
Notes | MILAB | Approved | no | ||
Call Number | Admin @ si @ BGR2015 | Serial | 2596 | ||
Permanent link to this record | |||||
Author | Kamal Nasrollahi; Sergio Escalera; P. Rasti; Gholamreza Anbarjafari; Xavier Baro; Hugo Jair Escalante; Thomas B. Moeslund | ||||
Title | Deep Learning based Super-Resolution for Improved Action Recognition | Type | Conference Article | ||
Year | 2015 | Publication | 5th International Conference on Image Processing Theory, Tools and Applications (IPTA 2015) | Abbreviated Journal |
Volume | Issue | Pages | 67 - 72 | ||
Keywords | |||||
Abstract | Action recognition systems mostly work with videos of proper quality and resolution. Even the most challenging benchmark databases for action recognition hardly include low-resolution videos from, e.g., surveillance cameras. In videos recorded by such cameras, due to the distance between people and cameras, people appear very small and hence challenge action recognition algorithms. Simple upsampling methods, like bicubic interpolation, cannot retrieve all the detailed information that can help the recognition. To deal with this problem, in this paper we combine the results of bicubic interpolation with the results of a state-of-the-art deep learning-based super-resolution algorithm, through an alpha-blending approach. The experimental results obtained on a down-sampled version of a large subset of the Hollywood2 benchmark database show the importance of the proposed system in increasing the recognition rate of a state-of-the-art action recognition system for handling low-resolution videos. | ||||
Address | Orleans; France; November 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | IPTA | ||
Notes | HuPBA;MV | Approved | no | ||
Call Number | Admin @ si @ NER2015 | Serial | 2648 | ||
Permanent link to this record | |||||
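The alpha-blending step the abstract above describes, combining bicubic interpolation with a deep super-resolution output, reduces to a pixel-wise weighted average (the default weight `alpha=0.5` is an assumption; the record does not state the blending weight used in the paper):

```python
def alpha_blend(bicubic, super_res, alpha=0.5):
    """Pixel-wise alpha blend of a bicubic upsampling and a deep
    super-resolution result, both given as equally sized 2D grids
    of intensities (simplified single-channel sketch)."""
    return [[alpha * b + (1.0 - alpha) * s
             for b, s in zip(brow, srow)]
            for brow, srow in zip(bicubic, super_res)]
```

The blend keeps the smooth low-frequency content of the interpolation while mixing in the sharper detail hallucinated by the super-resolution network.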
Author | Hanne Kause; Aura Hernandez-Sabate; Patricia Marquez; Andrea Fuster; Luc Florack; Hans van Assen; Debora Gil | ||||
Title | Confidence Measures for Assessing the HARP Algorithm in Tagged Magnetic Resonance Imaging | Type | Book Chapter | ||
Year | 2015 | Publication | Statistical Atlases and Computational Models of the Heart. Revised selected papers of Imaging and Modelling Challenges 6th International Workshop, STACOM 2015, Held in Conjunction with MICCAI 2015 | Abbreviated Journal | |
Volume | 9534 | Issue | Pages | 69-79 | |
Keywords | |||||
Abstract | Cardiac deformation and changes therein have been linked to pathologies. Both can be extracted in detail from tagged Magnetic Resonance Imaging (tMRI) using harmonic phase (HARP) images. Although point tracking algorithms have been shown to achieve high accuracy on HARP images, this accuracy varies with position. Detecting and discarding areas with unreliable results is crucial for use in clinical support systems. This paper assesses the capability of two confidence measures (CMs), based on energy and image structure, for detecting locations with reduced accuracy in motion tracking results. These CMs were tested on a database of simulated tMRI images containing the most common artifacts that may affect tracking accuracy. CM performance is assessed based on its capability for HARP tracking error bounding and compared in terms of significant differences detected using a multi-comparison analysis of variance that takes into account the most influential factors on HARP tracking performance. Results showed that the CM based on image structure was better suited to detect unreliable optical flow vectors. In addition, it was shown that CMs can be used to detect optical flow vectors with large errors in order to improve the optical flow obtained with the HARP tracking algorithm. | ||||
Address | Munich; Germany; January 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Springer International Publishing | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | 0302-9743 | ISBN | 978-3-319-28711-9 | Medium | |
Area | Expedition | Conference | STACOM | ||
Notes | ADAS; IAM; 600.075; 600.076; 600.060; 601.145 | Approved | no | ||
Call Number | Admin @ si @ KHM2015 | Serial | 2734 | ||
Permanent link to this record | |||||
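A structure-based confidence measure in the spirit of the CMs assessed in the record above can be sketched via the smaller eigenvalue of the local structure tensor built from image gradients (a standard choice in optical-flow literature; this is not the chapter's exact definition):

```python
def structure_confidence(ix, iy):
    """Confidence of a local motion estimate from image structure:
    the smaller eigenvalue of the 2x2 structure tensor accumulated
    from gradient samples (ix[k], iy[k]) over a neighbourhood.
    Near-zero means gradients point one way (aperture problem)."""
    jxx = sum(gx * gx for gx in ix)
    jyy = sum(gy * gy for gy in iy)
    jxy = sum(gx * gy for gx, gy in zip(ix, iy))
    trace, det = jxx + jyy, jxx * jyy - jxy * jxy
    # smaller eigenvalue of [[jxx, jxy], [jxy, jyy]]
    return trace / 2 - ((trace / 2) ** 2 - det) ** 0.5
```

Vectors whose confidence falls below a threshold would be discarded or corrected, which is exactly the use case the abstract motivates.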
Author | Andres Traumann; Sergio Escalera; Gholamreza Anbarjafari | ||||
Title | A New Retexturing Method for Virtual Fitting Room Using Kinect 2 Camera | Type | Conference Article | ||
Year | 2015 | Publication | 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) | Abbreviated Journal |
Volume | Issue | Pages | 75-79 | ||
Keywords | |||||
Abstract | |||||
Address | Boston; USA; June 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPRW | ||
Notes | HuPBA;MILAB | Approved | no | ||
Call Number | Admin @ si @ TEA2015 | Serial | 2653 | ||
Permanent link to this record | |||||
Author | Alvaro Cepero; Albert Clapes; Sergio Escalera | ||||
Title | Automatic non-verbal communication skills analysis: a quantitative evaluation | Type | Journal Article | ||
Year | 2015 | Publication | AI Communications | Abbreviated Journal | AIC |
Volume | 28 | Issue | 1 | Pages | 87-101 |
Keywords | Social signal processing; human behavior analysis; multi-modal data description; multi-modal data fusion; non-verbal communication analysis; e-Learning | ||||
Abstract | Oral communication competence is ranked among the most relevant skills for one's professional and personal life. Because of the importance of communication in our activities of daily living, it is crucial to study methods to evaluate it and to provide the feedback needed to improve these communication capabilities and, therefore, learn how to express ourselves better. In this work, we propose a system capable of evaluating the quality of oral presentations quantitatively and automatically. The system is based on a multi-modal RGB, depth, and audio data description and a fusion approach, in order to recognize behavioral cues and train classifiers able to predict communication quality levels. The performance of the proposed system is tested on a novel dataset containing real Bachelor's thesis defenses, presentations from an 8th-semester Bachelor course, and Master's course presentations at Universitat de Barcelona. Using the marks assigned by actual instructors as ground truth, our system achieves high performance in categorizing and ranking presentations by their quality, and also in making real-valued mark predictions. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0921-7126 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | HUPBA;MILAB | Approved | no | ||
Call Number | Admin @ si @ CCE2015 | Serial | 2549 | ||
Permanent link to this record | |||||
Author | Enric Marti; J.Roncaries; Debora Gil; Aura Hernandez-Sabate; Antoni Gurgui; Ferran Poveda | ||||
Title | PBL On Line: A proposal for the organization, part-time monitoring and assessment of PBL group activities | Type | Journal | ||
Year | 2015 | Publication | Journal of Technology and Science Education | Abbreviated Journal | JOTSE |
Volume | 5 | Issue | 2 | Pages | 87-96 |
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | IAM; ADAS; 600.076; 600.075 | Approved | no | ||
Call Number | Admin @ si @ MRG2015 | Serial | 2608 | ||
Permanent link to this record | |||||
Author | Ramin Irani; Kamal Nasrollahi; Chris Bahnsen; D.H. Lundtoft; Thomas B. Moeslund; Marc O. Simon; Ciprian Corneanu; Sergio Escalera; Tanja L. Pedersen; Maria-Louise Klitgaard; Laura Petrini | ||||
Title | Spatio-temporal Analysis of RGB-D-T Facial Images for Multimodal Pain Level Recognition | Type | Conference Article | ||
Year | 2015 | Publication | 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) | Abbreviated Journal |
Volume | Issue | Pages | 88-95 | ||
Keywords | |||||
Abstract | Pain is a vital sign of human health and its automatic detection can be of crucial importance in many different contexts, including medical scenarios. While most available computer vision techniques are based on RGB, in this paper we investigate the effect of combining RGB, depth, and thermal facial images for pain detection and pain intensity level recognition. For this purpose, we extract the energies released by facial pixels using a spatiotemporal filter. Experiments on a group of 12 elderly people applying the multimodal approach show that the proposed method successfully detects pain and recognizes between three intensity levels in 82% of the analyzed frames, improving by more than 6% over RGB-only analysis in similar conditions. | ||||
Address | Boston; USA; June 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPRW | ||
Notes | HuPBA;MILAB | Approved | no | ||
Call Number | Admin @ si @ INB2015 | Serial | 2654 | ||
Permanent link to this record | |||||
Author | Santiago Segui; Oriol Pujol; Jordi Vitria | ||||
Title | Learning to count with deep object features | Type | Conference Article | ||
Year | 2015 | Publication | Deep Vision: Deep Learning in Computer Vision, CVPR 2015 Workshop | Abbreviated Journal | |
Volume | Issue | Pages | 90-96 | ||
Keywords | |||||
Abstract | Learning to count is a learning strategy that has been recently proposed in the literature for dealing with problems where estimating the number of object instances in a scene is the final objective. In this framework, the task of learning to detect and localize individual object instances is seen as a harder task that can be evaded by casting the problem as that of computing a regression value from hand-crafted image features. In this paper we explore the features that are learned when training a counting convolutional neural network in order to understand their underlying representation. To this end we define a counting problem for MNIST data and show that the internal representation of the network is able to classify digits in spite of the fact that no direct supervision was provided for them during training. We also present preliminary results about a deep network that is able to count the number of pedestrians in a scene. | ||||
Address | Boston; USA; June 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPRW | ||
Notes | MILAB; HuPBA; OR;MV | Approved | no | ||
Call Number | Admin @ si @ SPV2015 | Serial | 2636 | ||
Permanent link to this record | |||||
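The MNIST counting problem mentioned in the abstract above could be synthesised along the following lines (the canvas size, maximum count, and paste-with-max compositing are assumptions; the counting network itself is omitted and overlap between pasted digits is not prevented):

```python
import random

def make_counting_sample(digit_images, max_count=5, canvas=(64, 64), crop=28):
    """Synthesise one counting sample: paste a random number of digit
    crops (crop x crop grids of intensities) at random positions on an
    empty canvas; the regression target is the number of pasted
    instances (hypothetical sketch of the counting setup)."""
    h, w = canvas
    image = [[0.0] * w for _ in range(h)]
    count = random.randint(0, max_count)
    for _ in range(count):
        digit = random.choice(digit_images)
        top = random.randint(0, h - crop)
        left = random.randint(0, w - crop)
        for r in range(crop):
            for c in range(crop):
                image[top + r][left + c] = max(image[top + r][left + c],
                                               digit[r][c])
    return image, count
```

Training a regressor on (image, count) pairs then sidesteps detection and localization entirely, which is the point of the learning-to-count framing.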
Author | Josep M. Gonfaus; Marco Pedersoli; Jordi Gonzalez; Andrea Vedaldi; Xavier Roca | ||||
Title | Factorized appearances for object detection | Type | Journal Article | ||
Year | 2015 | Publication | Computer Vision and Image Understanding | Abbreviated Journal | CVIU |
Volume | 138 | Issue | Pages | 92–101 | |
Keywords | Object recognition; Deformable part models; Learning and sharing parts; Discovering discriminative parts | ||||
Abstract | Deformable object models capture variations in an object’s appearance that can be represented as image deformations. Other effects such as out-of-plane rotations, three-dimensional articulations, and self-occlusions are often captured by considering a mixture of deformable models, one per object aspect. A more scalable approach is to represent the variations at the level of the object parts instead, applying the concept of a mixture locally. Combining a few part variations can in fact cheaply generate a large number of global appearances. A limited version of this idea was proposed by Yang and Ramanan [1] for human pose detection. In this paper we apply it to the task of generic object category detection and extend it in several ways. First, we propose a model for the relationship between part appearances more general than the tree of Yang and Ramanan [1], which is more suitable for generic categories. Second, we treat part locations as well as their appearance as latent variables, so that training does not need part annotations but only the object bounding boxes. Third, we modify the weakly-supervised learning of Felzenszwalb et al. and Girshick et al. [2], [3] to handle a significantly more complex latent structure. Our model is evaluated on standard object detection benchmarks and is found to improve over existing approaches, yielding state-of-the-art results for several object categories. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ISE; 600.063; 600.078 | Approved | no | ||
Call Number | Admin @ si @ GPG2015 | Serial | 2705 | ||
Permanent link to this record | |||||
Author | Carles Sanchez; Oriol Ramos Terrades; Patricia Marquez; Enric Marti; J.Roncaries; Debora Gil | ||||
Title | Automatic evaluation of practices in Moodle for Self Learning in Engineering | Type | Journal | ||
Year | 2015 | Publication | Journal of Technology and Science Education | Abbreviated Journal | JOTSE |
Volume | 5 | Issue | 2 | Pages | 97-106 |
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | IAM; DAG; 600.075; 600.077 | Approved | no | ||
Call Number | Admin @ si @ SRM2015 | Serial | 2610 | ||
Permanent link to this record | |||||
Author | Jorge Bernal; F. Javier Sanchez; Gloria Fernandez Esparrach; Debora Gil; Cristina Rodriguez de Miguel; Fernando Vilariño | ||||
Title | WM-DOVA Maps for Accurate Polyp Highlighting in Colonoscopy: Validation vs. Saliency Maps from Physicians | Type | Journal Article | ||
Year | 2015 | Publication | Computerized Medical Imaging and Graphics | Abbreviated Journal | CMIG |
Volume | 43 | Issue | Pages | 99-111 | |
Keywords | Polyp localization; Energy Maps; Colonoscopy; Saliency; Valley detection | ||||
Abstract | We introduce in this paper a novel polyp localization method for colonoscopy videos. Our method is based on a model of appearance for polyps which defines polyp boundaries in terms of valley information. We propose the integration of valley information in a robust way, fostering the complete, concave and continuous boundaries typically associated with polyps. This integration is done by using a window of radial sectors which accumulate valley information to create WM-DOVA energy maps related to the likelihood of polyp presence. We perform a double validation of our maps, which includes the introduction of two new databases, one of them, to our knowledge, the first fully annotated database with associated clinical metadata. First we assess that the highest value corresponds with the location of the polyp in the image. Second, we show that WM-DOVA energy maps are comparable with saliency maps obtained from physicians' fixations captured via an eye-tracker. Finally, we show that our method outperforms state-of-the-art computational saliency results. Our method performs well, particularly for small polyps, which are reported to be the main source of polyp miss-rate, indicating the potential applicability of our method in clinical practice. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0895-6111 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | MV; IAM; 600.047; 600.060; 600.075;SIAI | Approved | no | ||
Call Number | Admin @ si @ BSF2015 | Serial | 2609 | ||
Permanent link to this record |
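The radial-sector accumulation behind the WM-DOVA energy maps described above can be caricatured as follows: each sector around a candidate point keeps its strongest valley response, so only points surrounded by boundary evidence from many directions score high (a toy sketch, not the published WM-DOVA definition, which also weights and normalises the accumulated evidence):

```python
import math

def sector_energy(valley_map, cx, cy, radius, num_sectors=8):
    """Toy accumulation of valley evidence over radial sectors around a
    candidate centre (cx, cy): each sector contributes the strongest
    valley response it contains within the given radius."""
    best = [0.0] * num_sectors
    for y, row in enumerate(valley_map):
        for x, v in enumerate(row):
            dx, dy = x - cx, y - cy
            dist = math.hypot(dx, dy)
            if 0 < dist <= radius:
                sector = int((math.atan2(dy, dx) + math.pi)
                             / (2 * math.pi) * num_sectors) % num_sectors
                best[sector] = max(best[sector], v)
    return sum(best)
```

A closed ring of valley responses around the candidate scores the full number of sectors, while an isolated valley pixel contributes to one sector only, which is why concave, continuous polyp boundaries stand out.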