Records | |||||
---|---|---|---|---|---|
Author | Jaume Amores | ||||
Title | Multiple Instance Classification: review, taxonomy and comparative study | Type | Journal Article | ||
Year | 2013 | Publication | Artificial Intelligence | Abbreviated Journal | AI |
Volume | 201 | Issue | Pages | 81-105 | |
Keywords | Multi-instance learning; Codebook; Bag-of-Words | ||||
Abstract | Multiple Instance Learning (MIL) has become an important topic in the pattern recognition community, and many solutions to this problem have been proposed to date. Despite this fact, there is a lack of comparative studies that shed light on the characteristics and behavior of the different methods. In this work we provide such an analysis focused on the classification task (i.e., leaving out other learning tasks such as regression). In order to perform our study, we implemented fourteen methods grouped into three different families. We analyze the performance of the approaches across a variety of well-known databases, and we also study their behavior in synthetic scenarios in order to highlight their characteristics. As a result of this analysis, we conclude that methods that extract global bag-level information show a clearly superior performance in general. In this sense, the analysis permits us to understand why some types of methods are more successful than others, and it permits us to establish guidelines for the design of new MIL methods. |
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Elsevier Science Publishers Ltd. | Place of Publication | Essex, UK | Editor | |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0004-3702 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ADAS; 601.042; 600.057 | Approved | no | ||
Call Number | Admin @ si @ Amo2013 | Serial | 2273 | ||
Permanent link to this record | |||||
Author | Jaume Amores | ||||
Title | Vocabulary-based Approaches for Multiple-Instance Data: a Comparative Study | Type | Conference Article | ||
Year | 2010 | Publication | 20th International Conference on Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 4246–4250 | ||
Keywords | |||||
Abstract | Multiple Instance Learning (MIL) has become a hot topic and many different algorithms have been proposed in recent years. Despite this fact, there is a lack of comparative studies that shed light on the characteristics of the different methods and their behavior in different scenarios. In this paper we provide such an analysis. We include methods from different families, and pay special attention to vocabulary-based approaches, a new family of methods that has not received much attention in the MIL literature. The empirical comparison includes seven databases from four heterogeneous domains, implementations of eight popular MIL methods, and a study of the behavior under synthetic conditions. Based on this analysis, we show that, with an appropriate implementation, vocabulary-based approaches outperform other MIL methods in most cases, showing in general a more consistent performance. | ||||
Address | Istanbul, Turkey | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1051-4651 | ISBN | 978-1-4244-7542-1 | Medium | |
Area | Expedition | Conference | ICPR | ||
Notes | ADAS | Approved | no | ||
Call Number | ADAS @ adas @ Amo2010 | Serial | 1295 | ||
Permanent link to this record | |||||
Author | Mohammad Ali Bagheri; Qigang Gao; Sergio Escalera | ||||
Title | Generic Subclass Ensemble: A Novel Approach to Ensemble Classification | Type | Conference Article | ||
Year | 2014 | Publication | 22nd International Conference on Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 1254 - 1259 | ||
Keywords | |||||
Abstract | Multiple classifier systems, also known as classifier ensembles, have received great attention in recent years because of their improved classification accuracy in different applications. In this paper, we propose a new general approach to ensemble classification, named generic subclass ensemble, in which each base classifier is trained with data belonging to a subset of classes, and thus discriminates among a subset of target categories. The ensemble classifiers are then fused using a combination rule. The proposed approach differs from existing methods that manipulate the target attribute, since in our approach individual classification problems are not restricted to two-class problems. We perform a series of experiments to evaluate the efficiency of the generic subclass approach on a set of benchmark datasets. Experimental results with multilayer perceptrons show that the proposed approach presents a viable alternative to the most commonly used ensemble classification approaches. | ||||
Address | Stockholm; August 2014 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1051-4651 | ISBN | Medium | ||
Area | Expedition | Conference | ICPR | ||
Notes | HuPBA;MILAB | Approved | no | ||
Call Number | Admin @ si @ BGE2014b | Serial | 2445 | ||
Permanent link to this record | |||||
Author | Kai Wang; Luis Herranz; Joost Van de Weijer | ||||
Title | Continual learning in cross-modal retrieval | Type | Conference Article | ||
Year | 2021 | Publication | 2nd CLVISION workshop | Abbreviated Journal | |
Volume | Issue | Pages | 3628-3638 | ||
Keywords | |||||
Abstract | Multimodal representations and continual learning are two areas closely related to human intelligence. The former considers the learning of shared representation spaces where information from different modalities can be compared and integrated (we focus on cross-modal retrieval between language and visual representations). The latter studies how to prevent forgetting a previously learned task when learning a new one. While humans excel in these two aspects, deep neural networks are still quite limited. In this paper, we propose a combination of both problems into a continual cross-modal retrieval setting, where we study how the catastrophic interference caused by new tasks impacts the embedding spaces and their cross-modal alignment required for effective retrieval. We propose a general framework that decouples the training, indexing and querying stages. We also identify and study different factors that may lead to forgetting, and propose tools to alleviate it. We found that the indexing stage plays an important role and that simply avoiding reindexing the database with updated embedding networks can lead to significant gains. We evaluated our methods on two image-text retrieval datasets, obtaining significant gains with respect to the fine-tuning baseline. | ||||
Address | Virtual; June 2021 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPRW | ||
Notes | LAMP; 600.120; 600.141; 600.147; 601.379 | Approved | no | ||
Call Number | Admin @ si @ WHW2021 | Serial | 3566 | ||
Permanent link to this record | |||||
Author | Souhail Bakkali; Zuheng Ming; Mickael Coustaty; Marçal Rusiñol; Oriol Ramos Terrades | ||||
Title | VLCDoC: Vision-Language Contrastive Pre-Training Model for Cross-Modal Document Classification | Type | Journal Article | ||
Year | 2023 | Publication | Pattern Recognition | Abbreviated Journal | PR |
Volume | 139 | Issue | Pages | 109419 | |
Keywords | |||||
Abstract | Multimodal learning from document data has achieved great success lately as it allows pre-training semantically meaningful features as a prior into a learnable downstream approach. In this paper, we approach the document classification problem by learning cross-modal representations through language and vision cues, considering intra- and inter-modality relationships. Instead of merging features from different modalities into a common representation space, the proposed method exploits high-level interactions and learns relevant semantic information from effective attention flows within and across modalities. The proposed learning objective is devised between intra- and inter-modality alignment tasks, where the similarity distribution per task is computed by contracting positive sample pairs while simultaneously contrasting negative ones in the common feature representation space. Extensive experiments on public document classification datasets demonstrate the effectiveness and the generalization capacity of our model on both low-scale and large-scale datasets. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0031-3203 | ISBN | Medium | |
Area | Expedition | Conference | |||
Notes | DAG; 600.140; 600.121 | Approved | no | ||
Call Number | Admin @ si @ BMC2023 | Serial | 3826 | ||
Permanent link to this record | |||||
Author | Gisel Bastidas-Guacho; Patricio Moreno; Boris X. Vintimilla; Angel Sappa | ||||
Title | Application on the Loop of Multimodal Image Fusion: Trends on Deep-Learning Based Approaches | Type | Conference Article | ||
Year | 2023 | Publication | 13th International Conference on Pattern Recognition Systems | Abbreviated Journal | |
Volume | 14234 | Issue | Pages | 25–36 | |
Keywords | |||||
Abstract | Multimodal image fusion allows the combination of information from different modalities, which is useful for tasks such as object detection, edge detection, and tracking, to name a few. Using the fused representation for applications results in better task performance. There are several image fusion approaches, which have been summarized in surveys. However, the existing surveys focus on image fusion approaches where the application on the loop of multimodal image fusion is not considered. On the contrary, this study summarizes deep learning-based multimodal image fusion for computer vision (e.g., object detection) and image processing applications (e.g., semantic segmentation), that is, approaches where the application module leverages the multimodal fusion process to enhance the final result. Firstly, we introduce image fusion and the existing general frameworks for image fusion tasks such as multifocus, multiexposure and multimodal. Then, we describe the multimodal image fusion approaches. Next, we review the state-of-the-art deep learning multimodal image fusion approaches for vision applications. Finally, we conclude our survey with the trends of task-driven multimodal image fusion. | ||||
Address | Guayaquil; Ecuador; July 2023 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICPRS | ||
Notes | MSIAU | Approved | no | ||
Call Number | Admin @ si @ BMV2023 | Serial | 3932 | ||
Permanent link to this record | |||||
Author | Xavier Soria; Angel Sappa; Riad I. Hammoud | ||||
Title | Wide-Band Color Imagery Restoration for RGB-NIR Single Sensor Images | Type | Journal Article | ||
Year | 2018 | Publication | Sensors | Abbreviated Journal | SENS |
Volume | 18 | Issue | 7 | Pages | 2059 |
Keywords | RGB-NIR sensor; multispectral imaging; deep learning; CNNs | ||||
Abstract | Multi-spectral RGB-NIR sensors have become ubiquitous in recent years. These sensors allow the visible and near-infrared spectral bands of a given scene to be captured at the same time. With such cameras, the acquired imagery has a compromised RGB color representation due to near-infrared bands (700–1100 nm) cross-talking with the visible bands (400–700 nm).
This paper proposes two deep learning-based architectures to recover the full RGB color images, thus removing the NIR information from the visible bands. The proposed approaches directly restore the high-resolution RGB image by means of convolutional neural networks. They are evaluated with several outdoor images; both architectures reach a similar performance when evaluated in different scenarios and using different similarity metrics. Both of them improve upon state-of-the-art approaches. |
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ADAS; MSIAU; 600.086; 600.130; 600.122; 600.118 | Approved | no | ||
Call Number | Admin @ si @ SSH2018 | Serial | 3145 | ||
Permanent link to this record | |||||
Author | David Geronimo; Frederic Lerasle; Antonio Lopez | ||||
Title | State-driven particle filter for multi-person tracking | Type | Conference Article | ||
Year | 2012 | Publication | 11th International Conference on Advanced Concepts for Intelligent Vision Systems | Abbreviated Journal | |
Volume | 7517 | Issue | Pages | 467-478 | |
Keywords | human tracking | ||||
Abstract | Multi-person tracking can be exploited in applications such as driver assistance, surveillance, multimedia and human-robot interaction. With the help of human detectors, particle filters offer a robust method able to filter noisy detections and provide temporal coherence. However, some traditional problems such as occlusions with other targets or the scene, temporal drifting or even the detection of lost targets are rarely considered, degrading the system's performance. Some authors propose to overcome these problems using heuristics that are neither explained nor formalized in the papers, for instance by defining exceptions to the model updating depending on tracks overlapping. In this paper we propose to formalize these events by the use of a state-graph, defining the current state of the track (e.g., potential, tracked, occluded or lost) and the transitions between states in an explicit way. This approach has the advantage of linking track states to actions such as updating the underlying online models, which gives flexibility to the system. It provides an explicit representation to adapt the multiple parallel trackers depending on the context, i.e., each track can make use of a specific filtering strategy, dynamic model, number of particles, etc., depending on its state. We implement this technique in a single-camera multi-person tracker and test it in public video sequences. |
Address | Brno, Czech Republic | ||||
Corporate Author | Thesis | ||||
Publisher | Springer | Place of Publication | Heidelberg | Editor | J. Blanc-Talon et al. |
Language | English | Summary Language | Original Title | ||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ACIVS | ||
Notes | ADAS | Approved | yes | ||
Call Number | GLL2012; ADAS @ adas @ gll2012a | Serial | 1990 | ||
Permanent link to this record | |||||
Author | Manisha Das; Deep Gupta; Petia Radeva; Ashwini M. Bakde | ||||
Title | Multi-scale decomposition-based CT-MR neurological image fusion using optimized bio-inspired spiking neural model with meta-heuristic optimization | Type | Journal Article | ||
Year | 2021 | Publication | International Journal of Imaging Systems and Technology | Abbreviated Journal | IMA |
Volume | 31 | Issue | 4 | Pages | 2170-2188 |
Keywords | |||||
Abstract | Multi-modal medical image fusion plays an important role in clinical diagnosis and works as an assistance model for clinicians. In this paper, a computed tomography-magnetic resonance (CT-MR) image fusion model is proposed using an optimized bio-inspired spiking feedforward neural network in different decomposition domains. First, source images are decomposed into base (low-frequency) and detail (high-frequency) layer components. Low-frequency subbands are fused using texture energy measures to capture the local energy, contrast, and small edges in the fused image. High-frequency coefficients are fused using firing maps obtained by a pixel-activated neural model with parameters optimized individually by three different techniques: differential evolution, cuckoo search, and gray wolf optimization. In the optimization model, a fitness function is computed based on the edge index of resultant fused images, which helps to extract and preserve sharp edges available in the source CT and MR images. To validate the fusion performance, a detailed comparative analysis is presented among the proposed and state-of-the-art methods in terms of quantitative and qualitative measures along with computational complexity. Experimental results show that the proposed method produces a significantly better visual quality of fused images while outperforming the existing methods. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB; no menciona | Approved | no | ||
Call Number | Admin @ si @ DGR2021a | Serial | 3630 | ||
Permanent link to this record | |||||
Author | Akshita Gupta; Sanath Narayan; Salman Khan; Fahad Shahbaz Khan; Ling Shao; Joost Van de Weijer | ||||
Title | Generative Multi-Label Zero-Shot Learning | Type | Journal Article | ||
Year | 2023 | Publication | IEEE Transactions on Pattern Analysis and Machine Intelligence | Abbreviated Journal | TPAMI |
Volume | 45 | Issue | 12 | Pages | 14611-14624 |
Keywords | Generalized zero-shot learning; Multi-label classification; Zero-shot object detection; Feature synthesis | ||||
Abstract | Multi-label zero-shot learning strives to classify images into multiple unseen categories for which no data is available during training. The test samples can additionally contain seen categories in the generalized variant. Existing approaches rely on learning either shared or label-specific attention from the seen classes. Nevertheless, computing reliable attention maps for unseen classes during inference in a multi-label setting is still a challenge. In contrast, state-of-the-art single-label generative adversarial network (GAN) based approaches learn to directly synthesize the class-specific visual features from the corresponding class attribute embeddings. However, synthesizing multi-label features from GANs is still unexplored in the context of zero-shot setting. When multiple objects occur jointly in a single image, a critical question is how to effectively fuse multi-class information. In this work, we introduce different fusion approaches at the attribute-level, feature-level and cross-level (across attribute and feature-levels) for synthesizing multi-label features from their corresponding multi-label class embeddings. To the best of our knowledge, our work is the first to tackle the problem of multi-label feature synthesis in the (generalized) zero-shot setting. Our cross-level fusion-based generative approach outperforms the state-of-the-art on three zero-shot benchmarks: NUS-WIDE, Open Images and MS COCO. Furthermore, we show the generalization capabilities of our fusion approach in the zero-shot detection task on MS COCO, achieving favorable performance against existing methods. | ||||
Address | December 2023 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | LAMP; PID2021-128178OB-I00 | Approved | no | ||
Call Number | Admin @ si @ | Serial | 3853 | ||
Permanent link to this record | |||||
Author | Vacit Oguz Yazici; Joost Van de Weijer; Longlong Yu | ||||
Title | Visual Transformers with Primal Object Queries for Multi-Label Image Classification | Type | Conference Article | ||
Year | 2022 | Publication | 26th International Conference on Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Multi-label image classification is about predicting a set of class labels that can be considered as orderless sequential data. Transformers process the sequential data as a whole, therefore they are inherently good at set prediction. The first vision-based transformer model, which was proposed for the object detection task, introduced the concept of object queries. Object queries are learnable positional encodings that are used by attention modules in decoder layers to decode the object classes or bounding boxes using the regions of interest in an image. However, inputting the same set of object queries to different decoder layers hinders the training: it results in lower performance and delays convergence. In this paper, we propose the usage of primal object queries that are only provided at the start of the transformer decoder stack. In addition, we improve the mixup technique proposed for multi-label classification. The proposed transformer model with primal object queries improves the state-of-the-art class-wise F1 metric by 2.1% and 1.8%; and speeds up the convergence by 79.0% and 38.6% on MS-COCO and NUS-WIDE datasets respectively. | ||||
Address | Montreal; Quebec; Canada; August 2022 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICPR | ||
Notes | LAMP; 600.147; 601.309 | Approved | no | ||
Call Number | Admin @ si @ YWY2022 | Serial | 3786 | ||
Permanent link to this record | |||||
Author | Alejandro Ariza-Casabona; Bartlomiej Twardowski; Tri Kurniawan Wijaya | ||||
Title | Exploiting Graph Structured Cross-Domain Representation for Multi-domain Recommendation | Type | Conference Article | ||
Year | 2023 | Publication | European Conference on Information Retrieval – ECIR 2023: Advances in Information Retrieval | Abbreviated Journal | |
Volume | 13980 | Issue | Pages | 49–65 | |
Keywords | |||||
Abstract | Multi-domain recommender systems benefit from cross-domain representation learning and positive knowledge transfer. Both can be achieved by introducing a specific modeling of input data (i.e. disjoint history) or trying dedicated training regimes. At the same time, treating domains as separate input sources becomes a limitation as it does not capture the interplay that naturally exists between domains. In this work, we efficiently learn multi-domain representation of sequential users’ interactions using graph neural networks. We use temporal intra- and inter-domain interactions as contextual information for our method called MAGRec (short for Multi-domAin Graph-based Recommender). To better capture all relations in a multi-domain setting, we learn two graph-based sequential representations simultaneously: domain-guided for recent user interest, and general for long-term interest. This approach helps to mitigate the negative knowledge transfer problem from multiple domains and improve overall representation. We perform experiments on publicly available datasets in different scenarios where MAGRec consistently outperforms state-of-the-art methods. Furthermore, we provide an ablation study and discuss further extensions of our method. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ECIR | ||
Notes | LAMP | Approved | no | ||
Call Number | Admin @ si @ ATK2023 | Serial | 3933 | ||
Permanent link to this record | |||||
Author | Sergio Escalera; Alicia Fornes; Oriol Pujol; Petia Radeva | ||||
Title | Multi-class Binary Symbol Classification with Circular Blurred Shape Models | Type | Conference Article | ||
Year | 2009 | Publication | 15th International Conference on Image Analysis and Processing | Abbreviated Journal | |
Volume | 5716 | Issue | Pages | 1005–1014 | |
Keywords | |||||
Abstract | Multi-class binary symbol classification requires the use of rich descriptors and robust classifiers. Shape representation is a difficult task because of several symbol distortions, such as occlusions, elastic deformations, gaps or noise. In this paper, we present the Circular Blurred Shape Model descriptor. This descriptor encodes the arrangement information of object parts in a correlogram structure. A prior blurring degree defines the level of distortion allowed to the symbol. Moreover, we learn the new feature space using a set of Adaboost classifiers, which are combined in the Error-Correcting Output Codes framework to deal with the multi-class categorization problem. The presented work has been validated over different multi-class data sets, and compared to the state-of-the-art descriptors, showing significant performance improvements. | ||||
Address | Salerno, Italy | ||||
Corporate Author | Thesis | ||||
Publisher | Springer Berlin Heidelberg | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | 0302-9743 | ISBN | 978-3-642-04145-7 | Medium | |
Area | Expedition | Conference | ICIAP | ||
Notes | MILAB;HuPBA;DAG | Approved | no | ||
Call Number | BCNPCL @ bcnpcl @ EFP2009c | Serial | 1186 | ||
Permanent link to this record | |||||
Author | Olivier Penacchio; Xavier Otazu; Arnold J. Wilkins; Sara M. Haigh | ||||
Title | A mechanistic account of visual discomfort | Type | Journal Article | ||
Year | 2023 | Publication | Frontiers in Neuroscience | Abbreviated Journal | FN |
Volume | 17 | Issue | Pages | ||
Keywords | |||||
Abstract | Much of the neural machinery of the early visual cortex, from the extraction of local orientations to contextual modulations through lateral interactions, is thought to have developed to provide a sparse encoding of contour in natural scenes, allowing the brain to process efficiently most of the visual scenes we are exposed to. Certain visual stimuli, however, cause visual stress, a set of adverse effects ranging from simple discomfort to migraine attacks, and epileptic seizures in the extreme, all phenomena linked with an excessive metabolic demand. The theory of efficient coding suggests a link between excessive metabolic demand and images that deviate from natural statistics. Yet, the mechanisms linking energy demand and image spatial content in discomfort remain elusive. Here, we used theories of visual coding that link image spatial structure and brain activation to characterize the response to images observers reported as uncomfortable in a biologically based neurodynamic model of the early visual cortex that included excitatory and inhibitory layers to implement contextual influences. We found three clear markers of aversive images: a larger overall activation in the model, a less sparse response, and a more unbalanced distribution of activity across spatial orientations. When the ratio of excitation over inhibition was increased in the model, a phenomenon hypothesised to underlie interindividual differences in susceptibility to visual discomfort, the three markers of discomfort progressively shifted toward values typical of the response to uncomfortable stimuli. Overall, these findings propose a unifying mechanistic explanation for why there are differences between images and between observers, suggesting how visual input and idiosyncratic hyperexcitability give rise to abnormal brain responses that result in visual stress. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | NEUROBIT | Approved | no | ||
Call Number | Admin @ si @ POW2023 | Serial | 3886 | ||
Permanent link to this record | |||||
Author | Ariel Amato; Ivan Huerta; Mikhail Mozerov; Xavier Roca; Jordi Gonzalez | ||||
Title | Moving Cast Shadows Detection Methods for Video Surveillance Applications | Type | Book Chapter | ||
Year | 2014 | Publication | Augmented Vision and Reality | Abbreviated Journal | |
Volume | 6 | Issue | Pages | 23-47 | |
Keywords | |||||
Abstract | Moving cast shadows are a major concern for a broad range of vision-based surveillance applications because they make the object classification task highly difficult. Several shadow detection methods have been reported in the literature in recent years. They are mainly divided into two domains: one usually works with static images, whereas the second uses image sequences, namely video content. Although both cases can be analyzed analogously, there is a difference in the application field. In the first case, shadow detection methods can be exploited to obtain additional geometric and semantic cues about the shape and position of the casting object (‘shape from shadows’) as well as the localization of the light source. In the second case, the main purpose is usually change detection, scene matching or surveillance (usually in a background subtraction context). Shadows can in fact modify the shape and color of the target object in a negative way and therefore affect the performance of scene analysis and interpretation in many applications. This chapter mainly reviews shadow detection methods as well as their taxonomies related to the second case, thus aiming at those shadows which are associated with moving objects (moving shadows). | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Springer Berlin Heidelberg | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 2190-5916 | ISBN | 978-3-642-37840-9 | Medium | |
Area | Expedition | Conference | |||
Notes | ISE; 605.203; 600.049; 302.018; 302.012; 600.078 | Approved | no | ||
Call Number | Admin @ si @ AHM2014 | Serial | 2223 | ||
Permanent link to this record |