Records | |||||
---|---|---|---|---|---|
Author | Laura Lopez-Fuentes; Joost Van de Weijer; Manuel Gonzalez-Hidalgo; Harald Skinnemoen; Andrew Bagdanov | ||||
Title | Review on computer vision techniques in emergency situations | Type | Journal Article | ||
Year | 2018 | Publication | Multimedia Tools and Applications | Abbreviated Journal | MTAP |
Volume | 77 | Issue | 13 | Pages | 17069–17107 |
Keywords | Emergency management; Computer vision; Decision makers; Situational awareness; Critical situation | ||||
Abstract | In emergency situations, actions that save lives and limit the impact of hazards are crucial. In order to act, situational awareness is needed to decide what to do. Geolocalized photos and video of situations as they evolve can be crucial for understanding them better and making decisions faster. Cameras are almost everywhere these days, whether as smartphones, installed CCTV cameras, UAVs or others. However, this poses challenges of big data and information overload. Moreover, most of the time there is no disaster at any given location, so humans tasked with detecting sudden situations may not be as alert as needed at any point in time. Consequently, computer vision tools can be an excellent decision support. The range of emergencies in which computer vision tools have been considered or used is very wide, and there is great overlap across related emergency research. Researchers tend to focus on state-of-the-art systems that cover the same emergency they are studying, overlooking important research in other fields. In order to unveil this overlap, the survey is divided along four main axes: the types of emergencies that have been studied in computer vision, the objectives that the algorithms can address, the type of hardware needed and the algorithms used. This review therefore provides a broad overview of the progress of computer vision covering all sorts of emergencies. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | LAMP; 600.068; 600.120 | Approved | no | ||
Call Number | Admin @ si @ LWG2018 | Serial | 3041 | ||
Permanent link to this record | |||||
Author | Debora Gil; Aura Hernandez-Sabate; Antoni Carol; Oriol Rodriguez; Petia Radeva | ||||
Title | A Deterministic-Statistic Adventitia Detection in IVUS Images | Type | Conference Article | ||
Year | 2005 | Publication | ESC Congress | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | Electron microscopy; Unbending; 2D crystal; Interpolation; Approximation | ||||
Abstract | Plaque analysis in IVUS planes needs accurate intima and adventitia models. The large variety in adventitia descriptors makes its detection difficult and motivates using a classification strategy for selecting points on the structure. Whatever the set of descriptors used, the selection stage suffers from false responses due to noise and incomplete true curves. In order to smooth background noise while strengthening responses, we apply a restricted anisotropic filter that homogenizes grey levels along the image's significant structures. Candidate points are extracted by means of a simple semi-supervised adaptive classification of the filtered image response to edge and calcium detectors. The final model is obtained by interpolating the former line segments with an anisotropic contour closing technique based on functional extension principles. | ||||
Address | Stockholm; Sweden; September 2005 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Sweden (EU) | Editor ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ESC | ||
Notes | IAM;MILAB | Approved | no | ||
Call Number | IAM @ iam @ RMF2005a | Serial | 1523 | ||
Permanent link to this record | |||||
Author | Debora Gil; Aura Hernandez-Sabate; Antoni Carol; Oriol Rodriguez; Petia Radeva | ||||
Title | A Deterministic-Statistic Adventitia Detection in IVUS Images | Type | Conference Article | ||
Year | 2005 | Publication | 3rd International Workshop on Functional Imaging and Modeling of the Heart | Abbreviated Journal ||
Volume | Issue | Pages | 65-74 | ||
Keywords | Electron microscopy; Unbending; 2D crystal; Interpolation; Approximation | ||||
Abstract | Plaque analysis in IVUS planes needs accurate intima and adventitia models. The large variety in adventitia descriptors makes its detection difficult and motivates using a classification strategy for selecting points on the structure. Whatever the set of descriptors used, the selection stage suffers from false responses due to noise and incomplete true curves. In order to smooth background noise while strengthening responses, we apply a restricted anisotropic filter that homogenizes grey levels along the image's significant structures. Candidate points are extracted by means of a simple semi-supervised adaptive classification of the filtered image response to edge and calcium detectors. The final model is obtained by interpolating the former line segments with an anisotropic contour closing technique based on functional extension principles. | ||||
Address | Barcelona; June 2005 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | FIMH | ||
Notes | IAM;MILAB | Approved | no | ||
Call Number | IAM @ iam @ RMF2005 | Serial | 1524 | ||
Permanent link to this record | |||||
Author | Debora Gil; Jose Maria-Carazo; Roberto Marabini | ||||
Title | On the nature of 2D crystal unbending | Type | Journal Article | ||
Year | 2006 | Publication | Journal of Structural Biology | Abbreviated Journal | |
Volume | 156 | Issue | 3 | Pages | 546-555 |
Keywords | Electron microscopy | ||||
Abstract | Crystal unbending, the process that aims to recover a perfect crystal from experimental data, is one of the more important steps in electron crystallography image processing. The unbending process involves three steps: estimation of the unit cell displacements from their ideal positions, extension of the deformation field to the whole image, and transformation of the image in order to recover an ideal crystal. In this work, we present a systematic analysis of the second step oriented to address two issues. First, whether the unit cells remain undistorted and only the distances between them should be changed (rigid case), or whether they should be modified with the same deformation suffered by the whole crystal (elastic case). Second, the performance of different extension algorithms (interpolation versus approximation) is explored. Our experiments show that there is no difference between the elastic and rigid cases or among the extension algorithms. This implies that the deformation fields are constant over large areas. Furthermore, our results indicate that the main source of error is the transformation of the crystal image. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1047-8477 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | IAM | Approved | no ||
Call Number | IAM @ iam @ GCM2006 | Serial | 1519 | ||
Permanent link to this record | |||||
Author | T. Mouats; N. Aouf; Angel Sappa; Cristhian A. Aguilera-Carrasco; Ricardo Toledo | ||||
Title | Multi-Spectral Stereo Odometry | Type | Journal Article | ||
Year | 2015 | Publication | IEEE Transactions on Intelligent Transportation Systems | Abbreviated Journal | TITS |
Volume | 16 | Issue | 3 | Pages | 1210-1224 |
Keywords | Egomotion estimation; feature matching; multispectral odometry (MO); optical flow; stereo odometry; thermal imagery | ||||
Abstract | In this paper, we investigate the problem of visual odometry for ground vehicles based on the simultaneous use of multispectral cameras. It encompasses a stereo rig composed of optical (visible) and thermal sensors. The novelty resides in performing localization with the cameras as a stereo setup rather than as two monocular cameras of different spectra. To the best of our knowledge, this is the first time such a task has been attempted. Log-Gabor wavelets at different orientations and scales are used to extract interest points from both images. These are then described using a combination of frequency and spatial information within the local neighborhood. Matches between the pairs of multimodal images are computed using the cosine similarity function on the descriptors. A pyramidal Lucas–Kanade tracker is also introduced to tackle temporal feature matching within challenging sequences of the data sets. The vehicle egomotion is computed from the triangulated 3-D points corresponding to the matched features. A windowed version of bundle adjustment incorporating Gauss–Newton optimization is used for motion estimation. An outlier removal scheme is also included within the framework. Multispectral data sets corresponding to real outdoor scenarios captured with our multimodal setup were generated and used as a test bed. Finally, detailed results validating the proposed strategy are presented. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1524-9050 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ADAS; 600.055; 600.076 | Approved | no | ||
Call Number | Admin @ si @ MAS2015a | Serial | 2533 | ||
Permanent link to this record | |||||
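The descriptor-matching step summarized in the record above pairs multimodal interest points by cosine similarity. A minimal sketch of that idea, with toy descriptors and an illustrative acceptance threshold (not the authors' actual values), could look like:

```python
import numpy as np

def cosine_similarity_matches(desc_a, desc_b, threshold=0.9):
    """Match rows of desc_a to rows of desc_b by cosine similarity.

    desc_a: (n, d) array of feature descriptors from image A.
    desc_b: (m, d) array of feature descriptors from image B.
    Returns (i, j) index pairs whose best similarity exceeds threshold.
    """
    # Normalize each descriptor to unit length so the dot product
    # equals the cosine of the angle between descriptors.
    a = desc_a / np.linalg.norm(desc_a, axis=1, keepdims=True)
    b = desc_b / np.linalg.norm(desc_b, axis=1, keepdims=True)
    sim = a @ b.T  # (n, m) matrix of cosine similarities
    matches = []
    for i in range(sim.shape[0]):
        j = int(np.argmax(sim[i]))  # best candidate in image B
        if sim[i, j] >= threshold:
            matches.append((i, j))
    return matches
```

This is only the matching core; the full system in the paper also extracts the descriptors (Log-Gabor responses) and filters outliers downstream.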
Author | Marc Bolaños; R. Mestre; Estefania Talavera; Xavier Giro; Petia Radeva | ||||
Title | Visual Summary of Egocentric Photostreams by Representative Keyframes | Type | Conference Article | ||
Year | 2015 | Publication | IEEE International Conference on Multimedia and Expo ICMEW2015 | Abbreviated Journal | |
Volume | Issue | Pages | 1-6 | ||
Keywords | egocentric; lifelogging; summarization; keyframes | ||||
Abstract | Building a visual summary from an egocentric photostream captured by a lifelogging wearable camera is of high interest for different applications (e.g. memory reinforcement). In this paper, we propose a new summarization method based on keyframe selection that uses visual features extracted by means of a convolutional neural network. Our method applies unsupervised clustering to divide the photostream into events, and finally extracts the most relevant keyframe for each event. We assess the results through a blind-taste test in which a group of 20 people rated the quality of the summaries. | ||||
Address | Torino; Italy; July 2015 |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition |||
ISSN | ISBN | 978-1-4799-7079-7 | Medium | ||
Area | Expedition | Conference | ICME | ||
Notes | MILAB | Approved | no | ||
Call Number | Admin @ si @ BMT2015 | Serial | 2638 | ||
Permanent link to this record | |||||
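The pipeline described in the record above clusters frame features into events and keeps the most representative frame per event. As an illustrative sketch only (a tiny k-means stands in for the paper's actual clustering, and the feature vectors would come from a CNN):

```python
import numpy as np

def keyframes_by_clustering(features, k, iters=20, seed=0):
    """Cluster frame feature vectors into k events with a small k-means,
    then return the index of the frame closest to each event centroid."""
    rng = np.random.default_rng(seed)
    # Initialize centroids with k distinct frames.
    centroids = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        # Assign each frame to its nearest centroid.
        d = np.linalg.norm(features[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centroids; keep the old one if a cluster empties.
        for c in range(k):
            if np.any(labels == c):
                centroids[c] = features[labels == c].mean(axis=0)
    d = np.linalg.norm(features[:, None] - centroids[None], axis=2)
    labels = d.argmin(axis=1)
    keyframes = []
    for c in range(k):
        idx = np.where(labels == c)[0]
        if len(idx):
            # Representative keyframe: the frame nearest the centroid.
            keyframes.append(int(idx[d[idx, c].argmin()]))
    return sorted(keyframes)
```

In practice event segmentation on temporal photostreams is more involved than plain k-means; this only illustrates the select-nearest-to-centroid step.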
Author | Marc Bolaños; Alvaro Peris; Francisco Casacuberta; Sergi Solera; Petia Radeva | ||||
Title | Egocentric video description based on temporally-linked sequences | Type | Journal Article | ||
Year | 2018 | Publication | Journal of Visual Communication and Image Representation | Abbreviated Journal | JVCIR |
Volume | 50 | Issue | Pages | 205-216 | |
Keywords | egocentric vision; video description; deep learning; multi-modal learning | ||||
Abstract | Egocentric vision consists in acquiring images throughout the day from a first-person point of view using wearable cameras. The automatic analysis of this information makes it possible to discover daily patterns for improving the quality of life of the user. A natural topic that arises in egocentric vision is storytelling, that is, how to understand and tell the story lying behind the pictures. In this paper, we tackle storytelling as an egocentric sequence description problem. We propose a novel methodology that exploits information from temporally neighboring events, matching the nature of egocentric sequences. Furthermore, we present a new method for multimodal data fusion consisting of a multi-input attention recurrent network. We also release the EDUB-SegDesc dataset. This is the first dataset for egocentric image sequence description, consisting of 1,339 events with 3,991 descriptions, from 55 days acquired by 11 people. Finally, we show that our proposal outperforms classical attentional encoder-decoder methods for video description. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB; no proj | Approved | no | ||
Call Number | Admin @ si @ BPC2018 | Serial | 3109 | ||
Permanent link to this record | |||||
Author | Alejandro Cartas; Juan Marin; Petia Radeva; Mariella Dimiccoli | ||||
Title | Batch-based activity recognition from egocentric photo-streams revisited | Type | Journal Article | ||
Year | 2018 | Publication | Pattern Analysis and Applications | Abbreviated Journal | PAA |
Volume | 21 | Issue | 4 | Pages | 953–965 |
Keywords | Egocentric vision; Lifelogging; Activity recognition; Deep learning; Recurrent neural networks | ||||
Abstract | Wearable cameras can gather large amounts of image data that provide rich visual information about the daily activities of the wearer. Motivated by the large number of health applications that could be enabled by the automatic recognition of daily activities, such as lifestyle characterization for habit improvement, context-aware personal assistance and tele-rehabilitation services, we propose a system to classify 21 daily activities from photo-streams acquired by a wearable photo-camera. Our approach combines the advantages of a late fusion ensemble strategy relying on convolutional neural networks at image level with the ability of recurrent neural networks to account for the temporal evolution of high-level features in photo-streams without relying on event boundaries. The proposed batch-based approach achieved an overall accuracy of 89.85%, outperforming state-of-the-art end-to-end methodologies. These results were achieved on a dataset consisting of 44,902 egocentric pictures from three persons, captured over 26 days on average. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB; no proj | Approved | no | ||
Call Number | Admin @ si @ CMR2018 | Serial | 3186 | ||
Permanent link to this record | |||||
Author | Xavier Soria; Angel Sappa | ||||
Title | Improving Edge Detection in RGB Images by Adding NIR Channel | Type | Conference Article | ||
Year | 2018 | Publication | 14th IEEE International Conference on Signal-Image Technology & Internet-Based Systems | Abbreviated Journal ||
Volume | Issue | Pages | |||
Keywords | Edge detection; Contour detection; VGG; CNN; RGB-NIR; Near infrared images | ||||
Abstract | Edge detection is still a critical problem in many computer vision and image processing tasks. This manuscript presents a Holistically-Nested Edge Detection based approach to study the inclusion of the near-infrared channel alongside visible spectrum images. To do so, a single-sensor based dataset has been acquired in the 400nm to 1100nm wavelength spectral band. Promising results have been obtained even though the ground truth (annotated edge map) is based on the visible wavelength spectrum. | ||||
Address | Las Palmas de Gran Canaria; November 2018 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | SITIS | ||
Notes | MSIAU; 600.122 | Approved | no | ||
Call Number | Admin @ si @ SoS2018 | Serial | 3192 | ||
Permanent link to this record | |||||
Author | David Roche; Debora Gil; Jesus Giraldo | ||||
Title | Detecting loss of diversity for an efficient termination of EAs | Type | Conference Article | ||
Year | 2013 | Publication | 15th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing | Abbreviated Journal | |
Volume | Issue | Pages | 561 - 566 | ||
Keywords | EA termination; EA population diversity; EA steady state | ||||
Abstract | Terminating Evolutionary Algorithms (EA) at their steady state, so that useless iterations are not performed, is a key point for their efficient application to black-box problems. Many EAs evolve while there is still diversity in their population and, thus, they could be terminated by analyzing the behavior of some measures of EA population diversity. This paper presents a numeric approximation to steady states that can be used to detect the moment an EA population has lost its diversity, for EA termination. Our condition has been applied to 3 EA paradigms based on diversity and a selection of functions covering the properties most relevant for EA convergence. Experiments show that our condition works regardless of the search space dimension and function landscape. | ||||
Address | Timisoara; Romania |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-1-4799-3035-7 | Medium | ||
Area | Expedition | Conference | SYNASC | ||
Notes | IAM; 600.044; 600.060; 605.203 | Approved | no | ||
Call Number | Admin @ si @ RGG2013c | Serial | 2299 | ||
Permanent link to this record | |||||
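The record above terminates an EA once population diversity has been lost. A minimal sketch of that termination test, assuming a real-valued population and using mean per-dimension standard deviation as an illustrative diversity measure (not the paper's exact condition):

```python
import numpy as np

def diversity(population):
    """Population diversity: mean per-dimension standard deviation
    of a (pop_size, dim) array of candidate solutions."""
    return float(np.mean(np.std(population, axis=0)))

def has_converged(population, tol=1e-3):
    """Suggest EA termination when diversity falls below tol,
    i.e. when the population has collapsed to (nearly) one point."""
    return diversity(population) < tol
```

In an EA loop this check would run once per generation, stopping the run before further useless iterations.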
Author | S.Grau; Anna Puig; Sergio Escalera; Maria Salamo; Oscar Amoros | ||||
Title | Efficient complementary viewpoint selection in volume rendering | Type | Conference Article | ||
Year | 2013 | Publication | 21st WSCG Conference on Computer Graphics | Abbreviated Journal ||
Volume | Issue | Pages | |||
Keywords | Dual camera; Visualization; Interactive interfaces; Dynamic Time Warping |||||
Abstract | A major goal of visualization is to appropriately express knowledge of scientific data. Generally, gathering the visual information contained in volume data often requires a lot of expertise from the end user to set up the visualization parameters. One way of alleviating this problem is to provide the position of inner structures from different viewpoint locations to enhance the perception and construction of the mental image. To this end, traditional illustrations use two or three different views of the regions of interest. Similarly, with the aim of helping users to easily place a good viewpoint, this paper proposes an automatic and interactive method that locates complementary viewpoints from a reference camera in volume datasets. Specifically, the proposed method combines the quantity of information each camera provides for each structure with the shape similarity of the projections of the remaining viewpoints, based on Dynamic Time Warping. The selected complementary viewpoints allow a better understanding of the focused structure in several applications. Thus, the user interactively receives feedback based on several viewpoints that helps them understand the visual information. A live-user evaluation on different data sets shows good convergence to useful complementary viewpoints. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-808694374-9 | Medium | ||
Area | Expedition | Conference | WSCG | ||
Notes | HuPBA; 600.046;MILAB | Approved | no | ||
Call Number | Admin @ si @ GPE2013a | Serial | 2255 | ||
Permanent link to this record | |||||
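The shape-similarity component mentioned in the abstract above relies on Dynamic Time Warping. A standard textbook DTW distance between two 1-D sequences (not the authors' exact formulation, which operates on projected shape signatures) can be written as:

```python
import numpy as np

def dtw_distance(s, t):
    """Classic O(len(s)*len(t)) Dynamic Time Warping distance
    between two 1-D sequences, with absolute difference as local cost."""
    n, m = len(s), len(t)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            # Extend the cheapest of the three allowed warping moves:
            # insertion, deletion, or diagonal match.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])
```

Because DTW aligns sequences non-linearly, a sequence and a time-stretched copy of it have distance zero, which is what makes it useful for comparing projected contours sampled at different rates.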
Author | Antonio Lopez; David Vazquez; Gabriel Villalonga | ||||
Title | Data for Training Models, Domain Adaptation | Type | Book Chapter | ||
Year | 2018 | Publication | Intelligent Vehicles. Enabling Technologies and Future Developments | Abbreviated Journal | |
Volume | Issue | Pages | 395–436 | ||
Keywords | Driving simulator; hardware; software; interface; traffic simulation; macroscopic simulation; microscopic simulation; virtual data; training data | ||||
Abstract | Simulation can enable several developments in the field of intelligent vehicles. This chapter is divided into three main subsections. The first one deals with driving simulators. The continuous improvement of hardware performance is a well-known fact that is allowing the development of more complex driving simulators. Immersion in the simulation scene is increased by high-fidelity feedback to the driver. In the second subsection, traffic simulation is explained, as well as how it can be used for intelligent transport systems. Finally, it is rather clear that sensor-based perception and action must be based on data-driven algorithms. Simulation could provide data to train and test algorithms that are afterwards implemented in vehicles. These tools are explained in the third subsection. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ADAS; 600.118 | Approved | no | ||
Call Number | Admin @ si @ LVV2018 | Serial | 3047 | ||
Permanent link to this record | |||||
Author | Sergio Escalera; R. M. Martinez; Jordi Vitria; Petia Radeva; Maria Teresa Anguera | ||||
Title | Deteccion automatica de la dominancia en conversaciones diadicas | Type | Journal Article | ||
Year | 2010 | Publication | Escritos de Psicologia | Abbreviated Journal | EP |
Volume | 3 | Issue | 2 | Pages | 41–45 |
Keywords | Dominance detection; Non-verbal communication; Visual features | ||||
Abstract | Dominance refers to the level of influence a person has in a conversation. Dominance is an important research area in social psychology, but the problem of its automatic estimation is a very recent topic in the contexts of social and wearable computing. In this paper, we focus on dominance detection from visual cues. We estimate the correlation among observers who categorize the dominant people in a set of face-to-face conversations. Different dominance indicators from gestural communication are defined, manually annotated, and compared to the observers' opinions. Moreover, these indicators are automatically extracted from video sequences and learnt by binary classifiers. Results from the three analyses showed a high correlation and allow the categorization of dominant people in public discussion video sequences. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1989-3809 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | HUPBA; OR; MILAB;MV | Approved | no | ||
Call Number | BCNPCL @ bcnpcl @ EMV2010 | Serial | 1315 | ||
Permanent link to this record | |||||
Author | Smriti Joshi; Richard Osuala; Carlos Martin-Isla; Victor M.Campello; Carla Sendra-Balcells; Karim Lekadir; Sergio Escalera | ||||
Title | nn-UNet Training on CycleGAN-Translated Images for Cross-modal Domain Adaptation in Biomedical Imaging | Type | Conference Article | ||
Year | 2022 | Publication | International MICCAI Brainlesion Workshop | Abbreviated Journal | |
Volume | 12963 | Issue | Pages | 540–551 | |
Keywords | Domain adaptation; Vestibular schwannoma (VS); Deep learning; nn-UNet; CycleGAN | ||||
Abstract | In recent years, deep learning models have considerably advanced the performance of segmentation tasks on Brain Magnetic Resonance Imaging (MRI). However, these models show a considerable performance drop when they are evaluated on unseen data from a different distribution. Since annotation is often a hard and costly task requiring expert supervision, it is necessary to develop ways in which existing models can be adapted to unseen domains without any additional labelled information. In this work, we explore one such technique which extends the CycleGAN [2] architecture to generate label-preserving data in the target domain. The synthetic target domain data is used to train the nn-UNet [3] framework for the task of multi-label segmentation. The experiments are conducted and evaluated on the dataset [1] provided in the ‘Cross-Modality Domain Adaptation for Medical Image Segmentation’ challenge [23] for segmentation of vestibular schwannoma (VS) tumour and cochlea on contrast-enhanced (ceT1) and high-resolution (hrT2) MRI scans. With the proposed approach, our model obtains Dice scores (DSC) of 0.73 and 0.49 for tumour and cochlea, respectively, on the validation set of the dataset. This indicates the applicability of the proposed technique to real-world problems where data may be obtained by different acquisition protocols, as in [1], where hrT2 images are a more reliable, safer, and lower-cost alternative to ceT1. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | MICCAIW | ||
Notes | HUPBA; no menciona | Approved | no | ||
Call Number | Admin @ si @ JOM2022 | Serial | 3800 | ||
Permanent link to this record | |||||
Author | Angel Sappa; Patricia Suarez; Henry Velesaca; Dario Carpio | ||||
Title | Domain Adaptation in Image Dehazing: Exploring the Usage of Images from Virtual Scenarios | Type | Conference Article | ||
Year | 2022 | Publication | 16th International Conference on Computer Graphics, Visualization, Computer Vision and Image Processing | Abbreviated Journal | |
Volume | Issue | Pages | 85-92 | ||
Keywords | Domain adaptation; Synthetic hazed dataset; Dehazing | ||||
Abstract | This work presents a novel domain adaptation strategy for deep learning-based approaches to the image dehazing problem. First, a large set of synthetic images is generated using a realistic 3D graphics simulator; these synthetic images contain different densities of haze and are used to train the model, which is later adapted to any real scenario. The adaptation process requires just a few images to fine-tune the model parameters. The proposed strategy overcomes the limitation of training a given model with few images. In other words, it adapts a haze removal model trained with synthetic images to real scenarios. Note that it is quite difficult, if not impossible, to obtain large sets of pairs of real-world images (with and without haze) to train dehazing algorithms in a supervised way. Experimental results are provided showing the validity of the proposed domain adaptation strategy. | ||||
Address | Lisboa; Portugal; July 2022 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CGVCVIP | ||
Notes | MSIAU; no proj | Approved | no | ||
Call Number | Admin @ si @ SSV2022 | Serial | 3804 | ||
Permanent link to this record |