Records | |||||
---|---|---|---|---|---|
Author | Akhil Gurram; Onay Urfalioglu; Ibrahim Halfaoui; Fahd Bouzaraa; Antonio Lopez | ||||
Title | Semantic Monocular Depth Estimation Based on Artificial Intelligence | Type | Journal Article | ||
Year | 2020 | Publication | IEEE Intelligent Transportation Systems Magazine | Abbreviated Journal | ITSM |
Volume | 13 | Issue | 4 | Pages | 99-103 |
Keywords | |||||
Abstract | Depth estimation provides essential information to perform autonomous driving and driver assistance. A promising line of work consists of introducing additional semantic information about the traffic scene when training CNNs for depth estimation. In practice, this means that the depth data used for CNN training is complemented with images having pixel-wise semantic labels where the same raw training data is associated with both types of ground truth, i.e., depth and semantic labels. The main contribution of this paper is to show that this hard constraint can be circumvented, i.e., that we can train CNNs for depth estimation by leveraging the depth and semantic information coming from heterogeneous datasets. In order to illustrate the benefits of our approach, we combine KITTI depth and Cityscapes semantic segmentation datasets, outperforming state-of-the-art results on monocular depth estimation. | ||||
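The key idea in this abstract is training a single network with ground truth drawn from two disjoint datasets (depth-only KITTI and semantics-only Cityscapes). One plausible way to realize this, sketched here without any deep-learning framework, is a batch schedule that interleaves the two sources; the dataset contents, tags, and batch size below are illustrative assumptions, not the paper's implementation.

```python
# Interleave training batches from two heterogeneous datasets so each
# optimizer step sees either depth or semantic ground truth.
# Dataset contents, tags, and batch_size are illustrative assumptions.
from itertools import cycle, islice

def interleaved_batches(depth_data, semantic_data, batch_size=2):
    """Alternate ('depth', batch) and ('semantic', batch) indefinitely."""
    def batches(tag, data):
        pool = cycle(data)  # loop over the dataset forever
        while True:
            yield tag, list(islice(pool, batch_size))
    d = batches("depth", depth_data)
    s = batches("semantic", semantic_data)
    while True:
        yield next(d)
        yield next(s)
```

A training loop would then dispatch each batch to the matching loss head (depth regression or semantic segmentation) depending on the tag.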
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ADAS; 600.124; 600.118 | Approved | no | ||
Call Number | Admin @ si @ GUH2019 | Serial | 3306 | ||
Permanent link to this record | |||||
Author | Akshita Gupta; Sanath Narayan; Salman Khan; Fahad Shahbaz Khan; Ling Shao; Joost Van de Weijer | ||||
Title | Generative Multi-Label Zero-Shot Learning | Type | Journal Article | ||
Year | 2023 | Publication | IEEE Transactions on Pattern Analysis and Machine Intelligence | Abbreviated Journal | TPAMI |
Volume | 45 | Issue | 12 | Pages | 14611-14624 |
Keywords | Generalized zero-shot learning; Multi-label classification; Zero-shot object detection; Feature synthesis | ||||
Abstract | Multi-label zero-shot learning strives to classify images into multiple unseen categories for which no data is available during training. The test samples can additionally contain seen categories in the generalized variant. Existing approaches rely on learning either shared or label-specific attention from the seen classes. Nevertheless, computing reliable attention maps for unseen classes during inference in a multi-label setting is still a challenge. In contrast, state-of-the-art single-label generative adversarial network (GAN) based approaches learn to directly synthesize the class-specific visual features from the corresponding class attribute embeddings. However, synthesizing multi-label features from GANs is still unexplored in the context of the zero-shot setting. When multiple objects occur jointly in a single image, a critical question is how to effectively fuse multi-class information. In this work, we introduce different fusion approaches at the attribute level, feature level and cross-level (across attribute and feature levels) for synthesizing multi-label features from their corresponding multi-label class embeddings. To the best of our knowledge, our work is the first to tackle the problem of multi-label feature synthesis in the (generalized) zero-shot setting. Our cross-level fusion-based generative approach outperforms the state-of-the-art on three zero-shot benchmarks: NUS-WIDE, Open Images and MS COCO. Furthermore, we show the generalization capabilities of our fusion approach in the zero-shot detection task on MS COCO, achieving favorable performance against existing methods. | ||||
Address | December 2023 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | LAMP; PID2021-128178OB-I00 | Approved | no | ||
Call Number | Admin @ si @ | Serial | 3853 | ||
Permanent link to this record | |||||
Author | Albert Ali Salah; E. Pauwels; R. Tavenard; Theo Gevers | ||||
Title | T-Patterns Revisited: Mining for Temporal Patterns in Sensor Data | Type | Journal Article | ||
Year | 2010 | Publication | Sensors | Abbreviated Journal | SENS |
Volume | 10 | Issue | 8 | Pages | 7496-7513 |
Keywords | sensor networks; temporal pattern extraction; T-patterns; Lempel-Ziv; Gaussian mixture model; MERL motion data | ||||
Abstract | The trend toward using large numbers of simple sensors, as opposed to a few complex ones, to monitor places and systems creates a need for temporal pattern mining algorithms that work on such data. Existing methods that try to discover re-usable and interpretable patterns in temporal event data have several shortcomings. We contrast several recent approaches to the problem and extend the T-pattern algorithm, which was previously applied for the detection of sequential patterns in the behavioural sciences. The temporal complexity of the T-pattern approach is prohibitive in the scenarios we consider. We remedy this with a statistical model, obtaining a fast and robust algorithm for finding patterns in temporal data. We test our algorithm on a recent database, collected with passive infrared sensors, containing millions of events. | ||||
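The core T-pattern test asks whether event B follows event A within a critical interval [d1, d2] more often than a uniform-rate null model predicts. A minimal sketch of that test is below; the binomial null approximation and all parameter names are illustrative assumptions, not the paper's exact statistical model.

```python
# Sketch of the core T-pattern significance test: count how often B follows
# A within (t + d1, t + d2] and compare against a uniform-rate null.
# The binomial approximation here is an illustrative assumption.
from math import comb

def follows_within(a_times, b_times, d1, d2):
    """Count A occurrences followed by at least one B in (t + d1, t + d2]."""
    return sum(
        any(t + d1 < tb <= t + d2 for tb in b_times)
        for t in a_times
    )

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def t_pattern_pvalue(a_times, b_times, d1, d2, total_time):
    """Small p-value => 'A then B within [d1, d2]' is a candidate T-pattern."""
    n_a = len(a_times)
    hits = follows_within(a_times, b_times, d1, d2)
    # Null: B events are uniform over [0, total_time], so a window of width
    # (d2 - d1) contains at least one B with probability 1 - (1 - w/T)^n_b.
    w = d2 - d1
    p_hit = 1.0 - (1.0 - w / total_time) ** len(b_times)
    return binom_sf(hits, n_a, p_hit)
```

In practice the algorithm scans candidate intervals and keeps those whose p-value survives a significance threshold, then recurses to build longer patterns.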
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ALTRES;ISE | Approved | no | ||
Call Number | Admin @ si @ SPT2010 | Serial | 1845 | ||
Permanent link to this record | |||||
Author | Albert Ali Salah; Theo Gevers; Nicu Sebe; Alessandro Vinciarelli | ||||
Title | Computer Vision for Ambient Intelligence | Type | Journal Article | ||
Year | 2011 | Publication | Journal of Ambient Intelligence and Smart Environments | Abbreviated Journal | JAISE |
Volume | 3 | Issue | 3 | Pages | 187-191 |
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ISE | Approved | no | ||
Call Number | Admin @ si @ SGS2011a | Serial | 1725 | ||
Permanent link to this record | |||||
Author | Albert Clapes; Alex Pardo; Oriol Pujol; Sergio Escalera | ||||
Title | Action detection fusing multiple Kinects and a WIMU: an application to in-home assistive technology for the elderly | Type | Journal Article | ||
Year | 2018 | Publication | Machine Vision and Applications | Abbreviated Journal | MVAP |
Volume | 29 | Issue | 5 | Pages | 765–788 |
Keywords | Multimodal activity detection; Computer vision; Inertial sensors; Dense trajectories; Dynamic time warping; Assistive technology | ||||
Abstract | We present a vision-inertial system which combines two RGB-Depth devices together with a wearable inertial movement unit in order to detect activities of daily living. From multi-view videos, we extract dense trajectories enriched with a histogram-of-normals description computed from the depth cue and bag them into multi-view codebooks. During the subsequent classification step, a multi-class support vector machine with an RBF-χ2 kernel combines the descriptions at kernel level. In order to perform action detection from the videos, a sliding window approach is utilized. In parallel, we extract accelerations, rotation angles, and jerk features from the inertial data collected by the wearable placed on the user’s dominant wrist. During gesture spotting, dynamic time warping is applied and the alignment costs to a set of pre-selected gesture sub-classes are thresholded to determine possible detections. The outputs of the two modules are combined in a late-fusion fashion. The system is validated in a real-case scenario with elderly people from an elder home. Learning-based fusion results improve on those from the single modalities, demonstrating the success of such a multimodal approach. | ||||
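The gesture-spotting step described above aligns an inertial window against a gesture template with dynamic time warping and thresholds the alignment cost. A minimal one-dimensional sketch follows; the template, window, and threshold value are illustrative assumptions, not the paper's data or tuning.

```python
# Minimal DTW sketch for gesture spotting: align a signal window against a
# template and fire a detection when the normalized cost is low enough.
# Threshold and toy sequences are illustrative assumptions.
def dtw_cost(seq_a, seq_b):
    """Classic O(len_a * len_b) DTW with absolute-difference local cost."""
    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(seq_a[i - 1] - seq_b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m] / (n + m)  # length-normalize so one threshold fits all windows

def spot_gesture(window, template, threshold=0.25):
    """Detection fires when the aligned cost falls below the threshold."""
    return dtw_cost(window, template) < threshold
```

Real inertial data is multivariate (acceleration, rotation, jerk), so the local cost would be a vector distance rather than a scalar absolute difference.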
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | HUPBA; no proj | Approved | no | ||
Call Number | Admin @ si @ CPP2018 | Serial | 3125 | ||
Permanent link to this record | |||||
Author | Albert Clapes; Miguel Reyes; Sergio Escalera | ||||
Title | Multi-modal User Identification and Object Recognition Surveillance System | Type | Journal Article | ||
Year | 2013 | Publication | Pattern Recognition Letters | Abbreviated Journal | PRL |
Volume | 34 | Issue | 7 | Pages | 799-808 |
Keywords | Multi-modal RGB-Depth data analysis; User identification; Object recognition; Intelligent surveillance; Visual features; Statistical learning | ||||
Abstract | We propose an automatic surveillance system for user identification and object recognition based on multi-modal RGB-Depth data analysis. We model an RGB-Depth environment by learning a pixel-based background Gaussian distribution. Then, user and object candidate regions are detected and recognized using robust statistical approaches. The system robustly recognizes users and updates itself in an online way, identifying and detecting new actors in the scene. Moreover, segmented objects are described, matched, recognized, and updated online using view-point 3D descriptions, being robust to partial occlusions and local 3D viewpoint rotations. Finally, the system stores the history of user–object assignments, which is especially useful for surveillance scenarios. The system has been evaluated on a novel data set containing different indoor/outdoor scenarios, objects, and users, showing accurate recognition and better performance than standard state-of-the-art approaches. | ||||
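The per-pixel background Gaussian model mentioned in this abstract can be sketched compactly: each pixel keeps a running mean and variance, and a pixel is flagged foreground when it deviates by more than k standard deviations. The learning rate, threshold, and flat-list frame layout below are illustrative assumptions.

```python
# Per-pixel Gaussian background model: running mean/variance per pixel,
# foreground = deviation beyond k standard deviations.
# Parameters alpha, k, init_var are illustrative assumptions.
class PixelGaussianBackground:
    def __init__(self, first_frame, alpha=0.05, k=2.5, init_var=25.0):
        self.mean = [float(v) for v in first_frame]
        self.var = [init_var] * len(first_frame)
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        """Return a boolean foreground mask and update the model online."""
        mask = []
        for i, v in enumerate(frame):
            diff = v - self.mean[i]
            fg = diff * diff > (self.k ** 2) * self.var[i]
            mask.append(fg)
            if not fg:  # only pixels classified as background update the model
                self.mean[i] += self.alpha * diff
                self.var[i] += self.alpha * (diff * diff - self.var[i])
        return mask
```

The same update would run on both the RGB and depth channels, with candidate regions extracted from the resulting masks.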
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Elsevier | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | HUPBA; 600.046; 605.203;MILAB | Approved | no | ||
Call Number | Admin @ si @ CRE2013 | Serial | 2248 | ||
Permanent link to this record | |||||
Author | Albert Gordo; Alicia Fornes; Ernest Valveny | ||||
Title | Writer identification in handwritten musical scores with bags of notes | Type | Journal Article | ||
Year | 2013 | Publication | Pattern Recognition | Abbreviated Journal | PR |
Volume | 46 | Issue | 5 | Pages | 1337-1345 |
Keywords | |||||
Abstract | Writer Identification is an important task for the automatic processing of documents. However, the identification of the writer in graphical documents is still challenging. In this work, we adapt the Bag of Visual Words framework to the task of writer identification in handwritten musical scores. A vanilla implementation of this method already performs comparably to the state-of-the-art. Furthermore, we analyze the effect of two improvements of the representation: a Bhattacharyya embedding, which improves the results at virtually no extra cost, and a Fisher Vector representation that very significantly improves the results at the cost of a more complex and costly representation. Experimental evaluation shows results more than 20 points above the state-of-the-art in a new, challenging dataset. | ||||
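The Bag of Visual Words pipeline and the Bhattacharyya embedding mentioned in this abstract can be sketched in a few lines: local descriptors are hard-assigned to a codebook, histogrammed, L1-normalized, and square-rooted, so that a dot product between embedded histograms approximates the Bhattacharyya coefficient. The toy codebook and descriptors are illustrative assumptions.

```python
# Bag-of-Visual-Words histogram plus the sqrt (Bhattacharyya) embedding:
# <phi(p), phi(q)> = sum_i sqrt(p_i * q_i). Toy data is an assumption.
from math import sqrt

def bovw_histogram(descriptors, codebook):
    """Hard-assign each local descriptor to its nearest codeword."""
    hist = [0.0] * len(codebook)
    for d in descriptors:
        nearest = min(
            range(len(codebook)),
            key=lambda i: sum((a - b) ** 2 for a, b in zip(d, codebook[i])),
        )
        hist[nearest] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]  # L1 normalization

def bhattacharyya_embed(hist):
    """Square-root embedding of an L1-normalized histogram."""
    return [sqrt(h) for h in hist]
```

The Fisher Vector variant the abstract also mentions replaces the hard-assignment histogram with gradients of a Gaussian mixture log-likelihood, at the cost of a much higher-dimensional representation.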
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0031-3203 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | DAG | Approved | no | ||
Call Number | Admin @ si @ GFV2013 | Serial | 2307 | ||
Permanent link to this record | |||||
Author | Albert Gordo; Florent Perronnin; Ernest Valveny | ||||
Title | Large-scale document image retrieval and classification with runlength histograms and binary embeddings | Type | Journal Article | ||
Year | 2013 | Publication | Pattern Recognition | Abbreviated Journal | PR |
Volume | 46 | Issue | 7 | Pages | 1898-1905 |
Keywords | visual document descriptor; compression; large-scale; retrieval; classification | ||||
Abstract | We present a new document image descriptor based on multi-scale runlength histograms. This descriptor does not rely on layout analysis and can be computed efficiently. We show how this descriptor can achieve state-of-the-art results on two very different public datasets in classification and retrieval tasks. Moreover, we show how we can compress and binarize these descriptors to make them suitable for large-scale applications. We can achieve state-of-the-art results in classification using binary descriptors of as few as 16 to 64 bits. | ||||
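A runlength histogram of the kind this abstract describes can be sketched simply: for each row of a binarized page, count maximal runs of consecutive white and black pixels and bin their lengths logarithmically. The bin edges and the row-only scan are simplifying assumptions; the paper's descriptor also scans columns and multiple scales.

```python
# Runlength-histogram sketch: per-row runs of 0s (ink) and 1s (background),
# binned by length into log-spaced bins. Bin edges are an assumption.
def run_lengths(row):
    """Yield (value, length) for each maximal run in a 0/1 row."""
    runs, prev, length = [], row[0], 1
    for v in row[1:]:
        if v == prev:
            length += 1
        else:
            runs.append((prev, length))
            prev, length = v, 1
    runs.append((prev, length))
    return runs

def runlength_histogram(image, bins=(1, 2, 4, 8, 16)):
    """One length histogram per pixel value (0 = black ink, 1 = white)."""
    hists = {0: [0] * len(bins), 1: [0] * len(bins)}
    for row in image:
        for value, length in run_lengths(row):
            b = sum(length >= edge for edge in bins) - 1  # log-spaced bin index
            hists[value][b] += 1
    return hists
```

Concatenating such histograms over image regions and scales gives a fixed-length descriptor that needs no layout analysis, which is what makes it cheap to compute and easy to binarize.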
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Elsevier | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0031-3203 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | DAG; 600.042; 600.045; 605.203 | Approved | no | ||
Call Number | Admin @ si @ GPV2013 | Serial | 2306 | ||
Permanent link to this record | |||||
Author | Albert Gordo; Florent Perronnin; Yunchao Gong; Svetlana Lazebnik | ||||
Title | Asymmetric Distances for Binary Embeddings | Type | Journal Article | ||
Year | 2014 | Publication | IEEE Transactions on Pattern Analysis and Machine Intelligence | Abbreviated Journal | TPAMI |
Volume | 36 | Issue | 1 | Pages | 33-47 |
Keywords | |||||
Abstract | In large-scale query-by-example retrieval, embedding image signatures in a binary space offers two benefits: data compression and search efficiency. While most embedding algorithms binarize both query and database signatures, it has been noted that this is not strictly a requirement. Indeed, asymmetric schemes which binarize the database signatures but not the query still enjoy the same two benefits but may provide superior accuracy. In this work, we propose two general asymmetric distances which are applicable to a wide variety of embedding techniques including Locality Sensitive Hashing (LSH), Locality Sensitive Binary Codes (LSBC), Spectral Hashing (SH), PCA Embedding (PCAE), PCA Embedding with random rotations (PCAE-RR), and PCA Embedding with iterative quantization (PCAE-ITQ). We experiment on four public benchmarks containing up to 1M images and show that the proposed asymmetric distances consistently lead to large improvements over the symmetric Hamming distance for all binary embedding techniques. | ||||
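The asymmetric-distance idea in this abstract admits a very short sketch: the database vector is binarized, the query is kept real-valued, and distance is computed between the raw query and the reconstructed binary codeword. The zero threshold and the ±1 reconstruction below are illustrative assumptions about the embedding.

```python
# Symmetric Hamming vs. asymmetric distance: binarize only the database
# side and compare the real-valued query against the decoded +/-1 codeword.
# Thresholding at 0 and +/-1 decoding are illustrative assumptions.
def binarize(x):
    return [1 if v >= 0.0 else 0 for v in x]

def hamming(b1, b2):
    return sum(a != b for a, b in zip(b1, b2))

def asymmetric_sqdist(query, db_bits):
    """Squared L2 between the real query and the +/-1 decoded database code."""
    return sum((q - (1.0 if b else -1.0)) ** 2 for q, b in zip(query, db_bits))
```

The benefit is visible even in this toy form: two database codes at equal Hamming distance from the binarized query can have different asymmetric distances, because the query's magnitudes are never discarded.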
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0162-8828 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | DAG; 600.045; 605.203; 600.077 | Approved | no | ||
Call Number | Admin @ si @ GPG2014 | Serial | 2272 | ||
Permanent link to this record | |||||
Author | Alberto Hidalgo; Ferran Poveda; Enric Marti; Debora Gil; Albert Andaluz; Francesc Carreras; Manuel Ballester | ||||
Title | Evidence of continuous helical structure of the cardiac ventricular anatomy assessed by diffusion tensor imaging magnetic resonance multiresolution tractography | Type | Journal Article | ||
Year | 2012 | Publication | European Radiology | Abbreviated Journal | ECR |
Volume | 3 | Issue | 1 | Pages | 361-362 |
Keywords | |||||
Abstract | Deep understanding of myocardial structure linking the morphology and function of the heart would unravel crucial knowledge for medical and surgical clinical procedures and studies. Diffusion tensor MRI provides a discrete measurement of the 3D arrangement of myocardial fibres by observing the local anisotropic diffusion of water molecules in biological tissues. In this work, we present a multi-scale visualisation technique based on DT-MRI streamlining capable of uncovering additional properties of the architectural organisation of the heart. Methods and Materials: We selected the Johns Hopkins University (JHU) Canine Heart Dataset, where the long-axis cardiac plane is aligned with the scanner’s Z-axis. The equipment included a 4-element phased-array coil at 1.5 T. For DTI acquisition, a 3D-FSE sequence was applied. We used 200 seeds for full-scale tractography, while we applied a MIP mapping technique for simplified tractographic reconstruction. In this case, we reduced each DTI 3D volume’s dimensions by two orders of magnitude before streamlining. Our simplified tractographic reconstruction method keeps the main geometric features of the fibres, allowing for an easier identification of their global morphological disposition, including the ventricular basal ring. Moreover, we noticed a clearly visible helical disposition of the myocardial fibres, in line with the helical myocardial band ventricular structure described by Torrent-Guasp. Finally, our simplified visualisation with single tracts identifies the main segments of the helical ventricular architecture. DT-MRI makes possible the identification of a continuous helical architecture of the myocardial fibres, which validates Torrent-Guasp’s helical myocardial band ventricular anatomical model. | ||||
Address | Viena, Austria | ||||
Corporate Author | Thesis | ||||
Publisher | Springer Link | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1869-4101 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | IAM | Approved | no | ||
Call Number | IAM @ iam @ HPM2012 | Serial | 1858 | ||
Permanent link to this record | |||||
Author | Alejandro Cartas; Juan Marin; Petia Radeva; Mariella Dimiccoli | ||||
Title | Batch-based activity recognition from egocentric photo-streams revisited | Type | Journal Article | ||
Year | 2018 | Publication | Pattern Analysis and Applications | Abbreviated Journal | PAA |
Volume | 21 | Issue | 4 | Pages | 953–965 |
Keywords | Egocentric vision; Lifelogging; Activity recognition; Deep learning; Recurrent neural networks | ||||
Abstract | Wearable cameras can gather large amounts of image data that provide rich visual information about the daily activities of the wearer. Motivated by the large number of health applications that could be enabled by the automatic recognition of daily activities, such as lifestyle characterization for habit improvement, context-aware personal assistance and tele-rehabilitation services, we propose a system to classify 21 daily activities from photo-streams acquired by a wearable photo-camera. Our approach combines the advantages of a late fusion ensemble strategy relying on convolutional neural networks at image level with the ability of recurrent neural networks to account for the temporal evolution of high-level features in photo-streams without relying on event boundaries. The proposed batch-based approach achieved an overall accuracy of 89.85%, outperforming state-of-the-art end-to-end methodologies. These results were achieved on a dataset consisting of 44,902 egocentric pictures from three persons, captured over 26 days on average. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB; no proj | Approved | no | ||
Call Number | Admin @ si @ CMR2018 | Serial | 3186 | ||
Permanent link to this record | |||||
Author | Alejandro Cartas; Petia Radeva; Mariella Dimiccoli | ||||
Title | Activities of Daily Living Monitoring via a Wearable Camera: Toward Real-World Applications | Type | Journal Article | ||
Year | 2020 | Publication | IEEE Access | Abbreviated Journal | ACCESS |
Volume | 8 | Issue | Pages | 77344 - 77363 | |
Keywords | |||||
Abstract | Activity recognition from wearable photo-cameras is crucial for lifestyle characterization and health monitoring. However, to enable its widespread use in real-world applications, a high level of generalization needs to be ensured on unseen users. Currently, state-of-the-art methods have been tested only on relatively small datasets consisting of data collected by a few users that are partially seen during training. In this paper, we built a new egocentric dataset acquired by 15 people through a wearable photo-camera and used it to test the generalization capabilities of several state-of-the-art methods for egocentric activity recognition on unseen users and daily image sequences. In addition, we propose several variants to state-of-the-art deep learning architectures, and we show that it is possible to achieve 79.87% accuracy on users unseen during training. Furthermore, to show that the proposed dataset and approach can be useful in real-world applications, where data can be acquired by different wearable cameras and labeled data are scarcely available, we employed a domain adaptation strategy on two egocentric activity recognition benchmark datasets. These experiments show that the model learned with our dataset can easily be transferred to other domains with a very small amount of labeled data. Taken together, these results show that activity recognition from wearable photo-cameras is mature enough to be tested in real-world applications. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB; no proj | Approved | no | ||
Call Number | Admin @ si @ CRD2020 | Serial | 3436 | ||
Permanent link to this record | |||||
Author | Alejandro Gonzalez Alzate; David Vazquez; Antonio Lopez; Jaume Amores | ||||
Title | On-Board Object Detection: Multicue, Multimodal, and Multiview Random Forest of Local Experts | Type | Journal Article | ||
Year | 2017 | Publication | IEEE Transactions on cybernetics | Abbreviated Journal | Cyber |
Volume | 47 | Issue | 11 | Pages | 3980 - 3990 |
Keywords | Multicue; multimodal; multiview; object detection | ||||
Abstract | Despite recent significant advances, object detection continues to be an extremely challenging problem in real scenarios. In order to develop a detector that successfully operates under these conditions, it becomes critical to leverage multiple cues, multiple imaging modalities, and a strong multiview (MV) classifier that accounts for different object views and poses. In this paper, we provide an extensive evaluation that gives insight into how each of these aspects (multicue, multimodality, and strong MV classifier) affects accuracy both individually and when integrated together. In the multimodality component, we explore the fusion of RGB and depth maps obtained by high-definition light detection and ranging, a type of modality that is starting to receive increasing attention. As our analysis reveals, although all the aforementioned aspects significantly help in improving the accuracy, the fusion of visible spectrum and depth information boosts the accuracy by a much larger margin. The resulting detector not only ranks among the top performers in the challenging KITTI benchmark, but is also built upon very simple blocks that are easy to implement and computationally efficient. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 2168-2267 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ADAS; 600.085; 600.082; 600.076; 600.118 | Approved | no | ||
Call Number | Admin @ si @ | Serial | 2810 | ||
Permanent link to this record | |||||
Author | Alejandro Gonzalez Alzate; Zhijie Fang; Yainuvis Socarras; Joan Serrat; David Vazquez; Jiaolong Xu; Antonio Lopez | ||||
Title | Pedestrian Detection at Day/Night Time with Visible and FIR Cameras: A Comparison | Type | Journal Article | ||
Year | 2016 | Publication | Sensors | Abbreviated Journal | SENS |
Volume | 16 | Issue | 6 | Pages | 820 |
Keywords | Pedestrian Detection; FIR | ||||
Abstract | Despite all the significant advances in pedestrian detection brought by computer vision for driving assistance, it is still a challenging problem. One reason is the extremely varying lighting conditions under which such a detector should operate, namely day and night time. Recent research has shown that the combination of visible and non-visible imaging modalities may increase detection accuracy, where the infrared spectrum plays a critical role. The goal of this paper is to assess the accuracy gain of different pedestrian models (holistic, part-based, patch-based) when training with images in the far infrared spectrum. Specifically, we want to compare detection accuracy on test images recorded at day and nighttime if trained (and tested) using (a) plain color images, (b) just infrared images and (c) both of them. In order to obtain results for the last item we propose an early fusion approach to combine features from both modalities. We base the evaluation on a new dataset we have built for this purpose as well as on the publicly available KAIST multispectral dataset. | ||||
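The early-fusion approach this abstract proposes combines features from both modalities before classification. One common realization, sketched here as an assumption rather than the paper's exact recipe, is to normalize each modality's descriptor separately and concatenate.

```python
# Early-fusion sketch: L2-normalize the visible-spectrum and FIR feature
# vectors independently, then concatenate into one descriptor for the
# classifier. The normalization choice is an illustrative assumption.
from math import sqrt

def l2_normalize(v):
    n = sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def early_fusion(visible_feat, fir_feat):
    """Concatenate per-modality normalized descriptors into one vector."""
    return l2_normalize(visible_feat) + l2_normalize(fir_feat)
```

Per-modality normalization matters here because visible and FIR features typically live on different scales; without it, one modality can dominate the fused descriptor.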
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1424-8220 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ADAS; 600.085; 600.076; 600.082; 601.281 | Approved | no | ||
Call Number | ADAS @ adas @ GFS2016 | Serial | 2754 | ||
Permanent link to this record | |||||
Author | Alex Gomez-Villa; Adrian Martin; Javier Vazquez; Marcelo Bertalmio; Jesus Malo | ||||
Title | On the synthesis of visual illusions using deep generative models | Type | Journal Article | ||
Year | 2022 | Publication | Journal of Vision | Abbreviated Journal | JOV |
Volume | 22(8) | Issue | 2 | Pages | 1-18 |
Keywords | |||||
Abstract | Visual illusions expand our understanding of the visual system by imposing constraints on the models in two different ways: (i) visual illusions for humans should induce equivalent illusions in the model, and (ii) illusions synthesized from the model should also be compelling for human viewers. These constraints are alternative strategies to find good vision models. Following the first research strategy, recent studies have shown that artificial neural network architectures also have human-like illusory percepts when stimulated with classical hand-crafted stimuli designed to fool humans. In this work we focus on the second (less explored) strategy: we propose a framework to synthesize new visual illusions using the optimization abilities of current automatic differentiation techniques. The proposed framework can be used with classical vision models as well as with more recent artificial neural network architectures. This framework, validated by psychophysical experiments, can be used to study the difference between a vision model and actual human perception and to optimize the vision model to decrease this difference. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | LAMP; 600.161; 611.007 | Approved | no | ||
Call Number | Admin @ si @ GMV2022 | Serial | 3682 | ||
Permanent link to this record |