%0 Thesis
%T Context, Motion and Semantic Information for Computational Saliency
%A Aymen Azaza
%E Joost Van de Weijer
%E Ali Douik
%D 2018
%I Ediciones Graficas Rey
%@ 978-84-945373-9-4
%F Aymen Azaza2018
%O LAMP; 600.120
%O exported from refbase (http://refbase.cvc.uab.es/show.php?record=3218), last updated on Fri, 07 Jan 2022 12:10:56 +0100
%X The main objective of this thesis is to highlight the salient object in an image or in a video sequence. We address three important, but in our opinion insufficiently investigated, aspects of saliency detection. Firstly, we start by extending previous research on saliency with explicit modelling of the information provided by the context. Then, we show the importance of explicit context modelling for saliency estimation. Several important works in saliency are based on the usage of object proposals. However, these methods focus on the saliency of the object proposal itself and ignore the context. To introduce context into such saliency approaches, we couple every object proposal with its direct context. This allows us to evaluate the importance of the immediate surround (context) for its saliency. We propose several saliency features computed from the context proposals, including features based on omni-directional and horizontal context continuity. Secondly, we investigate the usage of top-down methods (high-level semantic information) for the task of saliency prediction, since most computational methods are bottom-up or only include a few semantic classes. We propose to consider a wider group of object classes. These objects represent important semantic information which we exploit in our saliency prediction approach. Thirdly, we develop a method to detect video saliency by computing saliency from supervoxels and optical flow. In addition, we apply the context features developed in this thesis to video saliency detection. The method combines shape and motion features with our proposed context features. To summarize, we show that extending object proposals with their direct context improves saliency detection in both image and video data. We also evaluate the importance of semantic information in saliency estimation. Finally, we propose a new motion feature to detect saliency in video data. The three proposed novelties are evaluated on standard saliency benchmark datasets and are shown to improve with respect to the state-of-the-art.
%9 Ph.D. thesis