Author Maedeh Aghaei; Mariella Dimiccoli; Petia Radeva
Title Multi-Face Tracking by Extended Bag-of-Tracklets in Egocentric Videos Type Miscellaneous
Year 2015 Publication Arxiv Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Egocentric images offer a hands-free way to record daily experiences and special events, where social interactions are of special interest. A natural question that arises is how to extract and track the appearance of multiple persons in a social event captured by a wearable camera. In this paper, we propose a novel method to find correspondences of multiple faces in low-temporal-resolution egocentric sequences acquired through a wearable camera. This kind of sequence imposes additional challenges on the multi-tracking problem with respect to conventional videos. Due to the free motion of the camera and to its low temporal resolution (2 fpm), abrupt changes in the field of view, in illumination conditions, and in the target location are very frequent. To overcome this difficulty, we propose to generate, for each detected face, a set of correspondences along the whole sequence, which we call a tracklet, and to take advantage of their redundancy to deal with both false-positive face detections and unreliable tracklets. Similar tracklets are grouped into so-called extended bags-of-tracklets (eBoTs), each of which is meant to correspond to a specific person. Finally, a prototype tracklet is extracted for each eBoT. We validated our method over a dataset of 18,000 images from 38 egocentric sequences with 52 trackable persons and compared it to state-of-the-art methods, demonstrating its effectiveness and robustness.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB Approved no
Call Number Admin @ si @ ADR2015b Serial 2713
Permanent link to this record
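Below, a minimal Python sketch of the grouping idea in the abstract above, assuming each tracklet is stored as a mapping from frame index to a bounding box; the mean-IoU similarity, the greedy threshold, and the helper names (iou, tracklet_similarity, group_into_ebots) are illustrative assumptions, not the paper's exact formulation.

# Sketch of extended bag-of-tracklets (eBoT) grouping. Each tracklet is a
# dict {frame_index: (x, y, w, h)}. The similarity (mean IoU over shared
# frames) and the 0.5 threshold are illustrative assumptions.

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def tracklet_similarity(t1, t2):
    """Mean IoU over the frames both tracklets cover; 0 if disjoint."""
    shared = set(t1) & set(t2)
    if not shared:
        return 0.0
    return sum(iou(t1[f], t2[f]) for f in shared) / len(shared)

def group_into_ebots(tracklets, threshold=0.5):
    """Greedily group mutually similar tracklets into eBoTs."""
    ebots = []
    for t in tracklets:
        for bag in ebots:
            if all(tracklet_similarity(t, o) >= threshold for o in bag):
                bag.append(t)
                break
        else:
            ebots.append([t])  # seed a new bag (a new person hypothesis)
    return ebots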
 

 
Author Maedeh Aghaei; Mariella Dimiccoli; Petia Radeva
Title Multi-face tracking by extended bag-of-tracklets in egocentric photo-streams Type Journal Article
Year 2016 Publication Computer Vision and Image Understanding Abbreviated Journal CVIU
Volume 149 Issue Pages 146-156
Keywords
Abstract Wearable cameras offer a hands-free way to record egocentric images of daily experiences, where social events are of special interest. The first step towards the detection of social events is to track the appearance of the multiple persons involved in them. In this paper, we propose a novel method to find correspondences of multiple faces in low-temporal-resolution egocentric videos acquired through a wearable camera. This kind of photo-stream imposes additional challenges on the multi-tracking problem with respect to conventional videos. Due to the free motion of the camera and to its low temporal resolution, abrupt changes in the field of view, in illumination conditions, and in the target location are highly frequent. To overcome these difficulties, we propose a multi-face tracking method that generates a set of tracklets by finding correspondences along the whole sequence for each detected face, and that takes advantage of the tracklets' redundancy to deal with unreliable ones. Similar tracklets are grouped into a so-called extended bag-of-tracklets (eBoT), which is meant to correspond to a specific person. Finally, a prototype tracklet is extracted for each eBoT, and occlusions are estimated by relying on a new measure of confidence. We validated our approach over an extensive dataset of egocentric photo-streams and compared it to state-of-the-art methods, demonstrating its effectiveness and robustness.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB Approved no
Call Number Admin @ si @ ADR2016b Serial 2742
Permanent link to this record
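Complementing the previous sketch, a hedged illustration of prototype extraction for one eBoT: here the prototype is simply the medoid tracklet (highest total similarity to its bag-mates), standing in for the paper's confidence-based extraction; the occlusion estimation the abstract describes is omitted.

# Sketch of prototype extraction for one eBoT: pick the medoid tracklet,
# i.e. the member with the highest summed similarity to the rest of the
# bag. This stands in for the paper's confidence-based prototype.

def prototype(bag, similarity):
    """Return the tracklet maximizing summed similarity to its bag-mates."""
    best, best_score = None, float("-inf")
    for t in bag:
        score = sum(similarity(t, o) for o in bag if o is not t)
        if score > best_score:
            best, best_score = t, score
    return best

# e.g. with the similarity from the previous sketch:
# representative = prototype(ebots[0], tracklet_similarity)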
 

 
Author Juan Jose Rubio; Takahiro Kashiwa; Teera Laiteerapong; Wenlong Deng; Kohei Nagai; Sergio Escalera; Kotaro Nakayama; Yutaka Matsuo; Helmut Prendinger
Title Multi-class structural damage segmentation using fully convolutional networks Type Journal Article
Year 2019 Publication Computers in Industry Abbreviated Journal COMPUTIND
Volume 112 Issue Pages 103121
Keywords Bridge damage detection; Deep learning; Semantic segmentation
Abstract Structural Health Monitoring (SHM) has benefited from computer vision and, more recently, deep learning approaches to accurately estimate the state of deterioration of infrastructure. In our work, we test Fully Convolutional Networks (FCNs) with a dataset of bridge deck areas for damage segmentation. We create a dataset for delamination and rebar exposure that has been collected from inspection records of bridges in Niigata Prefecture, Japan. The dataset consists of 734 images with three labels per image, which makes it the largest dataset of bridge deck damage images. These data allow us to estimate the performance of our method based on regions of agreement, which emulates the uncertainty of in-field inspections. We demonstrate the practicality of FCNs for automated semantic segmentation of surface damage. Our model achieves a mean accuracy of 89.7% for delamination and 78.4% for rebar exposure, and a weighted F1 score of 81.9%.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes HuPBA; no proj Approved no
Call Number Admin @ si @ RKL2019 Serial 3315
Permanent link to this record
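A short sketch of the two metrics reported in this abstract (per-class pixel accuracy and a support-weighted F1), computed from flattened ground-truth and predicted label maps; the paper's regions-of-agreement evaluation protocol is not reproduced here.

import numpy as np

def per_class_accuracy(y_true, y_pred, cls):
    """Fraction of ground-truth pixels of class cls that were predicted cls."""
    mask = y_true == cls
    return (y_pred[mask] == cls).mean() if mask.any() else float("nan")

def weighted_f1(y_true, y_pred, classes):
    """F1 per class, averaged with ground-truth pixel counts as weights."""
    f1s, weights = [], []
    for c in classes:
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
        weights.append(np.sum(y_true == c))
    return np.average(f1s, weights=weights)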
 

 
Author Eloi Puertas; Sergio Escalera; Oriol Pujol
Title Multi-Class Multi-Scale Stacked Sequential Learning Type Conference Article
Year 2011 Publication 10th International Conference on Multiple Classifier Systems Abbreviated Journal
Volume 6713 Issue Pages 197-206
Keywords
Abstract
Address Naples, Italy
Corporate Author Thesis
Publisher Springer Place of Publication Editor Carlo Sansone; Josef Kittler; Fabio Roli
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference MCS
Notes HuPBA;MILAB Approved no
Call Number Admin @ si @ PEP2011b Serial 1772
Permanent link to this record
 

 
Author Francesco Ciompi
Title Multi-Class Learning for Vessel Characterization in Intravascular Ultrasound Type Book Whole
Year 2012 Publication PhD Thesis, Universitat de Barcelona-CVC Abbreviated Journal
Volume Issue Pages
Keywords
Abstract In this thesis we tackle the problem of automatic characterization of human coronary vessels in the Intravascular Ultrasound (IVUS) image modality. The basis for the whole characterization process is machine learning applied to multi-class problems. In all the presented approaches, the Error-Correcting Output Codes (ECOC) framework is used as the central element for the design of multi-class classifiers.
Two main topics are tackled in this thesis. First, the automatic detection of the vessel borders is presented. For this purpose, a novel context-aware classifier for multi-class classification of the vessel morphology, namely ECOC-DRF, is presented. Based on ECOC-DRF, the lumen border and the media-adventitia border in IVUS are robustly detected by means of a novel holistic approach, achieving an error comparable with inter-observer variability and with state-of-the-art methods.
The two vessel borders define the atheroma area of the vessel, in which tissue characterization is required. For this purpose, we present a framework for automatic plaque characterization that processes both texture in IVUS images and spectral information in raw radio-frequency data. Furthermore, a novel method for fusing in-vivo and in-vitro IVUS data for plaque characterization, namely pSFFS, is presented. The method is shown to effectively fuse data, generating a classifier that improves tissue characterization on both in-vitro and in-vivo datasets.
A novel method for automatic video summarization in IVUS sequences is also presented. The method aims to detect the key frames of the sequence, i.e., the frames representative of morphological changes. It provides the basis for video summarization in IVUS as well as markers for partitioning the vessel into morphologically and clinically interesting events.
Finally, multi-class learning based on ECOC is applied to lung tissue characterization in Computed Tomography. The proposed approach, based on supervised and unsupervised learning, achieves accurate tissue classification on a large and heterogeneous dataset.
Address
Corporate Author Thesis Ph.D. thesis
Publisher Ediciones Graficas Rey Place of Publication Editor Petia Radeva;Oriol Pujol
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB Approved no
Call Number Admin @ si @ Cio2012 Serial 2146
Permanent link to this record
 

 
Author Sergio Escalera; David M.J. Tax; Oriol Pujol; Petia Radeva; Robert P.W. Duin
Title Multi-Class Classification in Image Analysis Via Error-Correcting Output Codes Type Book Chapter
Year 2011 Publication Innovations in Intelligent Image Analysis Abbreviated Journal
Volume 339 Issue Pages 7-29
Keywords
Abstract A common way to model multi-class classification problems is by means of Error-Correcting Output Codes (ECOC). Given a multi-class problem, the ECOC technique designs a codeword for each class, where each position of the code identifies the membership of the class in a given binary problem. A classification decision is obtained by assigning the label of the class with the closest code. In this paper, we overview the state of the art on ECOC designs and test them in real applications. Results on different multi-class data sets show the benefits of using ensembles of classifiers when categorizing objects in images.
Address
Corporate Author Thesis
Publisher Springer Berlin Heidelberg Place of Publication Berlin Editor H. Kwasnicka; L. Jain
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1860-949X ISBN 978-3-642-17933-4 Medium
Area Expedition Conference
Notes MILAB;HuPBA Approved no
Call Number Admin @ si @ ETP2011 Serial 1746
Permanent link to this record
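The decoding rule stated in the abstract, assign the label of the class with the closest code, is small enough to sketch directly. The one-vs-all coding matrix and Hamming decoding below are standard ECOC choices among the designs the chapter surveys, not a specific contribution of it; the helper names are assumptions.

import numpy as np

# Each class has a binary codeword (a row of the coding matrix); a test
# sample is labeled with the class whose codeword is closest, in Hamming
# distance, to the vector of binary-classifier outputs.

def one_vs_all_matrix(n_classes):
    """Codeword matrix M with M[c] = +1 at position c, -1 elsewhere."""
    return 2 * np.eye(n_classes) - 1

def ecoc_decode(binary_outputs, coding_matrix):
    """Return the class whose codeword has minimal Hamming distance."""
    dists = (coding_matrix != np.sign(binary_outputs)).sum(axis=1)
    return int(np.argmin(dists))

# e.g. 4 classes, dichotomizer outputs favoring class 2:
M = one_vs_all_matrix(4)
print(ecoc_decode(np.array([-1, -1, +1, -1]), M))  # -> 2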
 

 
Author Sergio Escalera; Alicia Fornes; Oriol Pujol; Petia Radeva
Title Multi-class Binary Symbol Classification with Circular Blurred Shape Models Type Conference Article
Year 2009 Publication 15th International Conference on Image Analysis and Processing Abbreviated Journal
Volume 5716 Issue Pages 1005–1014
Keywords
Abstract Multi-class binary symbol classification requires the use of rich descriptors and robust classifiers. Shape representation is a difficult task because of several symbol distortions, such as occlusions, elastic deformations, gaps, or noise. In this paper, we present the Circular Blurred Shape Model descriptor. This descriptor encodes the arrangement information of object parts in a correlogram structure. A prior blurring degree defines the level of distortion allowed for the symbol. Moreover, we learn the new feature space using a set of AdaBoost classifiers, which are combined in the Error-Correcting Output Codes framework to deal with the multi-class categorization problem. The presented work has been validated over different multi-class data sets and compared to state-of-the-art descriptors, showing significant performance improvements.
Address Salerno, Italy
Corporate Author Thesis
Publisher Springer Berlin Heidelberg Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-642-04145-7 Medium
Area Expedition Conference ICIAP
Notes MILAB;HuPBA;DAG Approved no
Call Number BCNPCL @ bcnpcl @ EFP2009c Serial 1186
Permanent link to this record
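A heavily simplified sketch of the descriptor idea in the abstract: contour points vote into a circular (rings-by-sectors) grid centered on the shape, and each vote is blurred to the neighboring angular bins. The actual CBSM correlogram and its blurring scheme are richer than this illustration; the function name and parameters are assumptions.

import numpy as np

def circular_blurred_histogram(points, rings=4, sectors=8, blur=0.5):
    """Blurred ring/sector histogram of 2D contour points -> flat descriptor."""
    pts = np.asarray(points, dtype=float)
    d = pts - pts.mean(axis=0)                      # center the shape
    r = np.linalg.norm(d, axis=1)
    theta = np.arctan2(d[:, 1], d[:, 0]) % (2 * np.pi)
    r_bin = np.minimum((r / (r.max() + 1e-9) * rings).astype(int), rings - 1)
    s_bin = (theta / (2 * np.pi) * sectors).astype(int) % sectors
    hist = np.zeros((rings, sectors))
    for rb, sb in zip(r_bin, s_bin):
        hist[rb, sb] += 1.0
        hist[rb, (sb - 1) % sectors] += blur        # blur to angular neighbors
        hist[rb, (sb + 1) % sectors] += blur
    return (hist / hist.sum()).ravel()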
 

 
Author Sergio Escalera; Alicia Fornes; Oriol Pujol; Josep Llados; Petia Radeva
Title Multi-class Binary Object Categorization using Blurred Shape Models Type Conference Article
Year 2007 Publication Progress in Pattern Recognition, Image Analysis and Applications, 12th Iberoamerican Congress on Pattern Recognition Abbreviated Journal
Volume 4756 Issue Pages 773–782
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN 978-3-540-76724-4 Medium
Area Expedition Conference CIARP
Notes MILAB; DAG;HuPBA Approved no
Call Number BCNPCL @ bcnpcl @ EFP2007 Serial 911
Permanent link to this record
 

 
Author Victor M. Campello; Polyxeni Gkontra; Cristian Izquierdo; Carlos Martin-Isla; Alireza Sojoudi; Peter M. Full; Klaus Maier-Hein; Yao Zhang; Zhiqiang He; Jun Ma; Mario Parreno; Alberto Albiol; Fanwei Kong; Shawn C. Shadden; Jorge Corral Acero; Vaanathi Sundaresan; Mina Saber; Mustafa Elattar; Hongwei Li; Bjoern Menze; Firas Khader; Christoph Haarburger; Cian M. Scannell; Mitko Veta; Adam Carscadden; Kumaradevan Punithakumar; Xiao Liu; Sotirios A. Tsaftaris; Xiaoqiong Huang; Xin Yang; Lei Li; Xiahai Zhuang; David Vilades; Martin L. Descalzo; Andrea Guala; Lucia La Mura; Matthias G. Friedrich; Ria Garg; Julie Lebel; Filipe Henriques; Mahir Karakas; Ersin Cavus; Steffen E. Petersen; Sergio Escalera; Santiago Segui; Jose F. Rodriguez Palomares; Karim Lekadir
Title Multi-Centre, Multi-Vendor and Multi-Disease Cardiac Segmentation: The M&Ms Challenge Type Journal Article
Year 2021 Publication IEEE Transactions on Medical Imaging Abbreviated Journal TMI
Volume 40 Issue 12 Pages 3543-3554
Keywords
Abstract The emergence of deep learning has considerably advanced the state-of-the-art in cardiac magnetic resonance (CMR) segmentation. Many techniques have been proposed over the last few years, bringing the accuracy of automated segmentation close to human performance. However, these models have been all too often trained and validated using cardiac imaging samples from single clinical centres or homogeneous imaging protocols. This has prevented the development and validation of models that are generalizable across different clinical centres, imaging conditions or scanner vendors. To promote further research and scientific benchmarking in the field of generalizable deep learning for cardiac segmentation, this paper presents the results of the Multi-Centre, Multi-Vendor and Multi-Disease Cardiac Segmentation (M&Ms) Challenge, which was recently organized as part of the MICCAI 2020 Conference. A total of 14 teams submitted different solutions to the problem, combining various baseline models, data augmentation strategies, and domain adaptation techniques. The obtained results indicate the importance of intensity-driven data augmentation, as well as the need for further research to improve generalizability towards unseen scanner vendors or new imaging protocols. Furthermore, we present a new resource of 375 heterogeneous CMR datasets acquired by using four different scanner vendors in six hospitals and three different countries (Spain, Canada and Germany), which we provide as open-access for the community to enable future research in the field.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes HUPBA; no proj Approved no
Call Number Admin @ si @ CGI2021 Serial 3653
Permanent link to this record
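Since the abstract singles out intensity-driven data augmentation as important for cross-vendor generalization, here is a minimal sketch of that kind of augmentation (random gamma plus brightness/contrast jitter on a normalized image); the parameter ranges and function name are illustrative assumptions, not taken from any participating team.

import numpy as np

def intensity_augment(img, rng):
    """Random gamma and brightness/contrast jitter on a 2D image."""
    img = (img - img.min()) / (img.max() - img.min() + 1e-9)  # to [0, 1]
    img = img ** rng.uniform(0.7, 1.5)                        # random gamma
    img = rng.uniform(0.9, 1.1) * img + rng.uniform(-0.1, 0.1)
    return np.clip(img, 0.0, 1.0)

aug = intensity_augment(np.random.rand(256, 256), np.random.default_rng(0))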
 

 
Author Qingshan Chen; Zhenzhen Quan; Yifan Hu; Yujun Li; Zhi Liu; Mikhail Mozerov
Title MSIF: multi-spectrum image fusion method for cross-modality person re-identification Type Journal Article
Year 2023 Publication International Journal of Machine Learning and Cybernetics Abbreviated Journal IJMLC
Volume Issue Pages
Keywords
Abstract Sketch-RGB cross-modality person re-identification (ReID) is a challenging task that aims to match a sketch portrait drawn by a professional artist with a full-body photo taken by surveillance equipment to deal with situations where the monitoring equipment is damaged at the accident scene. However, sketch portraits only provide highly abstract frontal body contour information and lack other important features such as color, pose, behavior, etc. The difference in saliency between the two modalities brings new challenges to cross-modality person ReID. To overcome this problem, this paper proposes a novel dual-stream model for cross-modality person ReID, which is able to mine modality-invariant features to reduce the discrepancy between sketch and camera images end-to-end. More specifically, we propose a multi-spectrum image fusion (MSIF) method, which aims to exploit the image appearance changes brought by multiple spectrums and guide the network to mine modality-invariant commonalities during training. It only processes the spectrum of the input images without adding additional calculations and model complexity, which can be easily integrated into other models. Moreover, we introduce a joint structure via a generalized mean pooling (GMP) layer and a self-attention (SA) mechanism to balance background and texture information and obtain the regional features with a large amount of information in the image. To further shrink the intra-class distance, a weighted regularized triplet (WRT) loss is developed without introducing additional hyperparameters. The model was first evaluated on the PKU Sketch ReID dataset, and extensive experimental results show that the Rank-1/mAP accuracy of our method is 87.00%/91.12%, reaching the current state-of-the-art performance. To further validate the effectiveness of our approach in handling cross-modality person ReID, we conducted experiments on two commonly used IR-RGB datasets (SYSU-MM01 and RegDB). The obtained results show that our method achieves competitive performance. These results confirm the ability of our method to effectively process images from different modalities.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes LAMP Approved no
Call Number Admin @ si @ CQH2023 Serial 3885
Permanent link to this record
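A small sketch of generalized mean (GeM) pooling of the kind named in the joint structure above: p = 1 recovers average pooling and large p approaches max pooling, which is how such a layer can balance background and texture information. The surrounding network, attention module, and WRT loss are not reproduced; feature values are assumed non-negative (e.g. post-ReLU).

import numpy as np

def gem_pool(features, p=3.0, eps=1e-6):
    """features: (C, H, W) non-negative map -> (C,) pooled descriptor."""
    x = np.clip(features, eps, None)
    return (x ** p).mean(axis=(1, 2)) ** (1.0 / p)

desc = gem_pool(np.abs(np.random.randn(64, 16, 8)))  # one (C,) vector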
 

 
Author Lluis Gomez; Dimosthenis Karatzas
Title MSER-based Real-Time Text Detection and Tracking Type Conference Article
Year 2014 Publication 22nd International Conference on Pattern Recognition Abbreviated Journal
Volume Issue Pages 3110 - 3115
Keywords
Abstract We present a hybrid algorithm for detection and tracking of text in natural scenes that goes beyond full-detection approaches in terms of time-performance optimization. A state-of-the-art scene text detection module based on Maximally Stable Extremal Regions (MSER) is used to detect text asynchronously, while on a separate thread detected text objects are tracked by MSER propagation. The cooperation of these two modules yields real-time video processing at high frame rates even on low-resource devices.
Address Stockholm; August 2014
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1051-4651 ISBN Medium
Area Expedition Conference ICPR
Notes DAG; 600.056; 601.158; 601.197; 600.077 Approved no
Call Number Admin @ si @ GoK2014a Serial 2492
Permanent link to this record
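A sketch of the detection half of this pipeline using OpenCV's stock MSER detector; the asynchronous threading, the text/non-text filtering, and the tracking-by-MSER-propagation module are the paper's contributions and are not reproduced here.

import cv2

def detect_mser_boxes(frame_bgr):
    """Return candidate-region bounding boxes (x, y, w, h) for one frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    mser = cv2.MSER_create()
    regions, boxes = mser.detectRegions(gray)
    return boxes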
 

 
Author Armin Mehri; Parichehr Behjati Ardakani; Angel Sappa
Title MPRNet: Multi-Path Residual Network for Lightweight Image Super Resolution Type Conference Article
Year 2021 Publication IEEE Winter Conference on Applications of Computer Vision Abbreviated Journal
Volume Issue Pages 2703-2712
Keywords
Abstract Lightweight super-resolution networks are extremely important for real-world applications. In recent years, several deep learning SR approaches with outstanding achievements have been introduced, but at the price of high memory and computational cost. To overcome this problem, a novel lightweight super-resolution network is proposed, which improves on the state-of-the-art performance in lightweight SR and performs roughly on par with computationally expensive networks. The Multi-Path Residual Network is designed with a set of Residual Concatenation Blocks stacked with Adaptive Residual Blocks: (i) to adaptively extract informative features and learn more expressive spatial context information; (ii) to better leverage multi-level representations before the up-sampling stage; and (iii) to allow an efficient information and gradient flow within the network. The proposed architecture also contains a new attention mechanism, the Two-Fold Attention Module, to maximize the representation ability of the model. Extensive experiments show the superiority of our model against other SOTA SR approaches.
Address Virtual; January 2021
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference WACV
Notes MSIAU; 600.130; 600.122 Approved no
Call Number Admin @ si @ MAS2021b Serial 3582
Permanent link to this record
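A generic PyTorch sketch of the residual-concatenation pattern the abstract names (residual blocks whose outputs are concatenated and fused before any up-sampling). This only illustrates the pattern and is not the authors' MPRNet definition; the Adaptive Residual Blocks and Two-Fold Attention Module are omitted, and all class names here are assumptions.

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Plain conv-ReLU-conv block with an identity skip connection."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)

class ConcatResidualStack(nn.Module):
    """Run n residual blocks and fuse their concatenated outputs by 1x1 conv."""
    def __init__(self, ch, n=3):
        super().__init__()
        self.blocks = nn.ModuleList(ResidualBlock(ch) for _ in range(n))
        self.fuse = nn.Conv2d(n * ch, ch, 1)

    def forward(self, x):
        outs = []
        for block in self.blocks:
            x = block(x)
            outs.append(x)
        return self.fuse(torch.cat(outs, dim=1))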
 

 
Author Angel Sappa; David Geronimo; Fadi Dornaika; Mohammad Rouhani; Antonio Lopez
Title Moving object detection from mobile platforms using stereo data registration Type Book Chapter
Year 2012 Publication Computational Intelligence paradigms in advanced pattern classification Abbreviated Journal
Volume 386 Issue Pages 25-37
Keywords pedestrian detection
Abstract This chapter describes a robust approach for detecting moving objects from on-board stereo vision systems. It relies on a feature-point, quaternion-based registration, which avoids common problems that appear when computationally expensive iterative algorithms are used in dynamic environments. The proposed approach consists of three main stages. Initially, feature points are extracted and tracked through consecutive 2D frames. Then, a RANSAC-based approach is used for registering two point sets with known correspondences in 3D space. The computed 3D rigid displacement is used to map two consecutive 3D point clouds into the same coordinate system by means of the quaternion method. Finally, moving objects correspond to those areas with large 3D registration errors. Experimental results show the viability of the proposed approach to detect moving objects like vehicles or pedestrians in different urban scenarios.
Address
Corporate Author Thesis
Publisher Springer Berlin Heidelberg Place of Publication Editor Marek R. Ogiela; Lakhmi C. Jain
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1860-949X ISBN 978-3-642-24048-5 Medium
Area Expedition Conference
Notes ADAS Approved no
Call Number Admin @ si @ SGD2012 Serial 2061
Permanent link to this record
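The quaternion registration step has a classical closed form (Horn's method), sketched below for two corresponded 3D point sets; the RANSAC loop the chapter wraps around it to reject outliers is omitted, and this is the textbook algorithm rather than the chapter's exact implementation.

import numpy as np

def quaternion_register(P, Q):
    """Return R, t with Q ~ R @ P + t for (N, 3) corresponded point sets."""
    p0, q0 = P.mean(axis=0), Q.mean(axis=0)
    S = (P - p0).T @ (Q - q0)                      # 3x3 cross-covariance
    A = S - S.T
    d = np.array([A[1, 2], A[2, 0], A[0, 1]])
    N = np.zeros((4, 4))                           # Horn's 4x4 matrix
    N[0, 0] = np.trace(S)
    N[0, 1:] = N[1:, 0] = d
    N[1:, 1:] = S + S.T - np.trace(S) * np.eye(3)
    w, x, y, z = np.linalg.eigh(N)[1][:, -1]       # top eigenvector = quaternion
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ])
    return R, q0 - R @ p0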
 

 
Author Ariel Amato; Ivan Huerta; Mikhail Mozerov; Xavier Roca; Jordi Gonzalez
Title Moving Cast Shadows Detection Methods for Video Surveillance Applications Type Book Chapter
Year 2014 Publication Augmented Vision and Reality Abbreviated Journal
Volume 6 Issue Pages 23-47
Keywords
Abstract Moving cast shadows are a major concern for a broad range of vision-based surveillance applications because they severely hinder the object classification task. Several shadow detection methods have been reported in the literature in recent years. They are mainly divided into two domains: one usually works with static images, whereas the other uses image sequences, namely video content. Although both cases can be analyzed analogously, there is a difference in the application field. In the first case, shadow detection methods can be exploited to obtain additional geometric and semantic cues about the shape and position of the casting object ('shape from shadows') as well as the localization of the light source. In the second case, the main purpose is usually change detection, scene matching, or surveillance (usually in a background subtraction context). Shadows can in fact negatively alter the shape and color of the target object and therefore affect the performance of scene analysis and interpretation in many applications. This chapter mainly reviews shadow detection methods, as well as their taxonomies, related to the second case, thus focusing on shadows that are associated with moving objects (moving shadows).
Address
Corporate Author Thesis
Publisher Springer Berlin Heidelberg Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 2190-5916 ISBN 978-3-642-37840-9 Medium
Area Expedition Conference
Notes ISE; 605.203; 600.049; 302.018; 302.012; 600.078 Approved no
Call Number Admin @ si @ AHM2014 Serial 2223
Permanent link to this record
 

 
Author Ariel Amato
Title Moving cast shadow detection Type Journal Article
Year 2014 Publication Electronic letters on computer vision and image analysis Abbreviated Journal ELCVIA
Volume 13 Issue 2 Pages 70-71
Keywords
Abstract Motion perception is an amazing innate ability of the creatures on the planet. This adroitness entails a functional advantage that enables species to compete better in the wild. The motion perception ability is employed at different levels, from the simplest interaction with the 'physis' up to the most transcendental survival tasks. Among the five classical perception systems, vision is the most widely used in the motion perception field. Millions of years of evolution have led to a highly specialized visual system in humans, which is characterized by tremendous accuracy as well as extraordinary robustness. Although humans and an immense diversity of species can distinguish moving objects with seeming simplicity, it has proven to be a difficult and non-trivial problem from a computational perspective. In the field of Computer Vision, the detection of moving objects is a challenging and fundamental research area. It can be referred to as the 'origin' of vast and numerous vision-based research sub-areas. Nevertheless, from the bottom to the top of this hierarchical analysis, the foundations still rely on when and where motion has occurred in an image. Pixels corresponding to moving objects in image sequences can be identified by measuring changes in their values. However, a pixel's value (representing a combination of color and brightness) can also vary due to other factors, such as variation in scene illumination, camera noise, and nonlinear sensor responses. The challenge lies in detecting whether the changes in pixel values are caused by a genuine object movement or not. An additional challenging aspect of motion detection is represented by moving cast shadows. The paradox arises because a moving object and its cast shadow share similar motion patterns; however, a moving cast shadow is not a moving object. In fact, a shadow represents a photometric illumination effect caused by the relative position of the object with respect to the light sources. Shadow detection methods are mainly divided into two domains, depending on the application field: one normally considers static images where shadows are cast by static objects, whereas the other is concerned with image sequences where shadows are cast by moving objects. In the first case, shadows can provide additional geometric and semantic cues about the shape and position of the casting object as well as the localization of the light source. Although this information can be extracted from static images as well as video sequences, the main focus in the second area is usually change detection, scene matching, or surveillance. In this context, a shadow can severely interfere with the analysis and interpretation of the scene. The work done in this thesis focuses on the second case: it addresses the problem of detection and removal of moving cast shadows in video sequences in order to enhance the detection of moving objects.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ISE Approved no
Call Number Admin @ si @ Ama2014 Serial 2870
Permanent link to this record
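To make the distinction in this abstract concrete, a textbook chromaticity cue for shadows in a background-subtraction setting: a cast-shadow pixel keeps roughly the background's chromaticity while its brightness drops. This is a classical illustration with illustrative thresholds, not the detector developed in the thesis.

import numpy as np

def shadow_mask(frame, background, lo=0.4, hi=0.95, tau=0.05):
    """Boolean mask of pixels that look like cast shadow on the background."""
    f = frame.astype(float) + 1e-6
    b = background.astype(float) + 1e-6
    ratio = f.sum(axis=2) / b.sum(axis=2)          # brightness change
    chroma_f = f / f.sum(axis=2, keepdims=True)    # normalized rgb
    chroma_b = b / b.sum(axis=2, keepdims=True)
    chroma_diff = np.abs(chroma_f - chroma_b).sum(axis=2)
    # darker than background, but not too dark, with near-unchanged chromaticity
    return (ratio > lo) & (ratio < hi) & (chroma_diff < tau)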