Records | |||||
---|---|---|---|---|---|
Author | Henry Velesaca; Patricia Suarez; Angel Sappa; Dario Carpio; Rafael E. Rivadeneira; Angel Sanchez | ||||
Title | Review on Common Techniques for Urban Environment Video Analytics | Type | Conference Article | ||
Year | 2022 | Publication | Anais do III Workshop Brasileiro de Cidades Inteligentes | Abbreviated Journal | |
Volume | Issue | Pages | 107-118 | ||
Keywords | Video Analytics; Review; Urban Environments; Smart Cities | ||||
Abstract | This work compiles the different computer vision-based approaches from the state of the art intended for video analytics in urban environments. The manuscript groups the approaches according to the typical modules present in video analysis, including image preprocessing, object detection, classification, and tracking. This pipeline serves as a basic guide for organizing the most representative approaches addressed in this work. Furthermore, the manuscript is not intended to be an exhaustive review of the most advanced approaches, but rather a list of common techniques proposed to address recurring problems in this field. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | WBCI | ||
Notes | MSIAU; 601.349 | Approved | no | ||
Call Number | Admin @ si @ VSS2022 | Serial | 3773 | ||
Permanent link to this record | |||||
Author | Alicia Fornes; Asma Bensalah; Cristina Carmona_Duarte; Jialuo Chen; Miguel A. Ferrer; Andreas Fischer; Josep Llados; Cristina Martin; Eloy Opisso; Rejean Plamondon; Anna Scius-Bertrand; Josep Maria Tormos | ||||
Title | The RPM3D Project: 3D Kinematics for Remote Patient Monitoring | Type | Conference Article | ||
Year | 2022 | Publication | Intertwining Graphonomics with Human Movements. 20th International Conference of the International Graphonomics Society, IGS 2022 | Abbreviated Journal | |
Volume | 13424 | Issue | Pages | 217-226 | |
Keywords | Healthcare applications; Kinematic; Theory of Rapid Human Movements; Human activity recognition; Stroke rehabilitation; 3D kinematics | ||||
Abstract | This project explores the feasibility of remote patient monitoring based on the analysis of 3D movements captured with smartwatches. We base our analysis on the Kinematic Theory of Rapid Human Movement. We have validated our research in a real-case scenario for stroke rehabilitation at the Guttmann Institute (https://www.guttmann.com/en/), a neurorehabilitation hospital, showing promising results. Our work could have a great impact on remote healthcare applications, improving medical efficiency and reducing healthcare costs. Future steps include further clinical validation, developing multi-modal analysis architectures (analysing data from sensors, images, audio, etc.), and exploring the application of our technology to monitor other neurodegenerative diseases. | ||||
Address | June 7-9, 2022, Las Palmas de Gran Canaria, Spain | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | IGS | ||
Notes | DAG; 600.121; 600.162; 602.230; 600.140 | Approved | no | ||
Call Number | Admin @ si @ FBC2022 | Serial | 3739 | ||
Permanent link to this record | |||||
Author | Aneesh Rangnekar; Zachary Mulhollan; Anthony Vodacek; Matthew Hoffman; Angel Sappa; Erik Blasch; Jun Yu; Liwen Zhang; Shenshen Du; Hao Chang; Keda Lu; Zhong Zhang; Fang Gao; Ye Yu; Feng Shuang; Lei Wang; Qiang Ling; Pranjay Shyam; Kuk-Jin Yoon; Kyung-Soo Kim | ||||
Title | Semi-Supervised Hyperspectral Object Detection Challenge Results – PBVS 2022 | Type | Conference Article | ||
Year | 2022 | Publication | IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) | Abbreviated Journal | |
Volume | Issue | Pages | 390-398 | ||
Keywords | Training; Computer vision; Conferences; Training data; Object detection; Semi-supervised learning; Transformers | ||||
Abstract | This paper summarizes the top contributions to the first semi-supervised hyperspectral object detection (SSHOD) challenge, which was organized as part of the Perception Beyond the Visible Spectrum (PBVS) 2022 workshop at the Computer Vision and Pattern Recognition (CVPR) conference. The SSHOD challenge is built on a first-of-its-kind hyperspectral dataset with temporally contiguous frames collected from a university rooftop observing a 4-way vehicle intersection over a period of three days. The dataset contains a total of 2890 frames, captured at an average resolution of 1600 × 192 pixels, with 51 hyperspectral bands from 400nm to 900nm. The challenge uses 989 images as the training set, 605 as the validation set, and 1296 as the evaluation (test) set. Each set was acquired on a different day to maximize the variance in weather conditions. Labels are provided for only 10% of the annotated data, formulating a semi-supervised learning task that is evaluated in terms of average precision over the entire set of classes, as well as over the individual moving object classes: vehicle, bus, and bike. The challenge received 38 individual registrations, with 8 participants in the validation phase and 3 in the test phase. This paper describes the dataset acquisition, the challenge formulation, the proposed methods, and qualitative and quantitative results. | ||||
Address | New Orleans; USA; June 2022 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPRW | ||
Notes | MSIAU; not mentioned | Approved | no | ||
Call Number | Admin @ si @ RMV2022 | Serial | 3774 | ||
Permanent link to this record | |||||
Author | Rafael E. Rivadeneira; Angel Sappa; Boris X. Vintimilla | ||||
Title | Thermal Image Super-Resolution: A Novel Unsupervised Approach | Type | Conference Article | ||
Year | 2022 | Publication | International Joint Conference on Computer Vision, Imaging and Computer Graphics | Abbreviated Journal | |
Volume | 1474 | Issue | Pages | 495–506 | |
Keywords | |||||
Abstract | This paper proposes the use of a CycleGAN architecture for thermal image super-resolution under a transfer-domain strategy, where middle-resolution images from one camera are transferred to the higher-resolution domain of another camera. The proposed approach is trained with a large dataset acquired using three thermal cameras at different resolutions, following an unsupervised learning process. An additional loss function is proposed to improve on state-of-the-art results. Evaluations are performed following the first thermal image super-resolution challenge (PBVS-CVPR2020), and a comparison with previous works shows that the proposed approach reaches the best results. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | VISIGRAPP | ||
Notes | MSIAU; 600.130 | Approved | no | ||
Call Number | Admin @ si @ RSV2022d | Serial | 3776 | ||
Permanent link to this record | |||||
Author | Rafael E. Rivadeneira; Angel Sappa; Boris X. Vintimilla | ||||
Title | Multi-Image Super-Resolution for Thermal Images | Type | Conference Article | ||
Year | 2022 | Publication | 17th International Conference on Computer Vision Theory and Applications (VISAPP 2022) | Abbreviated Journal | |
Volume | 4 | Issue | Pages | 635-642 | |
Keywords | Thermal Images; Multi-view; Multi-frame; Super-Resolution; Deep Learning; Attention Block | ||||
Abstract | This paper proposes a novel CNN architecture for the multi-thermal-image super-resolution problem. In the proposed scheme, the multiple images are synthetically generated by downsampling and slightly shifting the given image; noise is also added to each of these synthesized images. The proposed architecture uses two attention-block paths to extract high-frequency details, taking advantage of the large amount of information extracted from multiple images of the same scene. Experimental results are provided, showing that the proposed scheme outperforms state-of-the-art approaches. | ||||
Address | Online; Feb 6-8, 2022 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | VISAPP | ||
Notes | MSIAU; 601.349 | Approved | no | ||
Call Number | Admin @ si @ RSV2022a | Serial | 3690 | ||
Permanent link to this record | |||||
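The multi-image synthesis described in the record above (downsampling, slight shifts, and added noise applied to a single image) can be sketched as follows; the frame count, shift range, noise level, and block-averaging downsampler are illustrative assumptions, since the abstract does not specify them.

```python
import numpy as np

def synthesize_lr_stack(hr, n_frames=5, scale=2, max_shift=2, noise_std=2.0, seed=0):
    """Build a stack of noisy, slightly shifted low-resolution frames from a
    single image. Frame count, shift range, noise level and the block-average
    downsampler are illustrative assumptions, not details from the paper."""
    rng = np.random.default_rng(seed)
    h, w = hr.shape
    h, w = h - h % scale, w - w % scale  # crop so the scale divides evenly
    frames = []
    for _ in range(n_frames):
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        shifted = np.roll(hr[:h, :w], (dy, dx), axis=(0, 1))  # slight shift
        # downsample by block averaging (stand-in for bicubic downsampling)
        lr = shifted.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
        frames.append(lr + rng.normal(0.0, noise_std, lr.shape))  # add noise
    return np.stack(frames)

stack = synthesize_lr_stack(np.arange(64.0 * 64.0).reshape(64, 64))
print(stack.shape)  # (5, 32, 32)
```

Each frame then carries slightly different samples of the same scene, which is the redundancy the attention-block paths exploit.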
Author | Rafael E. Rivadeneira; Angel Sappa; Boris X. Vintimilla; Jin Kim; Dogun Kim; Zhihao Li; Yingchun Jian; Bo Yan; Leilei Cao; Fengliang Qi; Hongbin Wang Rongyuan Wu; Lingchen Sun; Yongqiang Zhao; Lin Li; Kai Wang; Yicheng Wang; Xuanming Zhang; Huiyuan Wei; Chonghua Lv; Qigong Sun; Xiaolin Tian; Zhuang Jia; Jiakui Hu; Chenyang Wang; Zhiwei Zhong; Xianming Liu; Junjun Jiang | ||||
Title | Thermal Image Super-Resolution Challenge Results – PBVS 2022 | Type | Conference Article | ||
Year | 2022 | Publication | IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) | Abbreviated Journal | |
Volume | Issue | Pages | 418-426 | ||
Keywords | |||||
Abstract | This paper presents results from the third Thermal Image Super-Resolution (TISR) challenge organized in the Perception Beyond the Visible Spectrum (PBVS) 2022 workshop. The challenge uses the same thermal image dataset as the first two challenges, with 951 training images and 50 validation images at each resolution; a set of 20 images was kept aside for testing. The evaluation tasks were to measure the PSNR and SSIM between the SR image and the ground truth (the HR thermal noisy image downsampled by four), and between the SR image and the semi-registered HR image (acquired with another camera). The results outperformed those from last year's challenge, improving both evaluation metrics. This year, almost 100 participants registered for the challenge, showing the community's interest in this hot topic. | ||||
Address | New Orleans; USA; June 2022 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPRW | ||
Notes | MSIAU; not mentioned | Approved | no | ||
Call Number | Admin @ si @ RSV2022c | Serial | 3775 | ||
Permanent link to this record | |||||
Author | Sergi Garcia Bordils; Andres Mafla; Ali Furkan Biten; Oren Nuriel; Aviad Aberdam; Shai Mazor; Ron Litman; Dimosthenis Karatzas | ||||
Title | Out-of-Vocabulary Challenge Report | Type | Conference Article | ||
Year | 2022 | Publication | Proceedings European Conference on Computer Vision Workshops | Abbreviated Journal | |
Volume | 13804 | Issue | Pages | 359–375 | |
Keywords | |||||
Abstract | This paper presents the final results of the Out-Of-Vocabulary 2022 (OOV) challenge. The OOV contest introduces an important aspect that is not commonly studied by Optical Character Recognition (OCR) models, namely, the recognition of scene text instances unseen at training time. The competition compiles a collection of public scene text datasets comprising 326,385 images with 4,864,405 scene text instances, thus covering a wide range of data distributions. A new and independent validation and test set is formed with scene text instances that are out of vocabulary at training time. The competition was structured in two tasks, end-to-end and cropped scene text recognition, respectively. A thorough analysis of results from baselines and different participants is presented. Interestingly, current state-of-the-art models show a significant performance gap under the newly studied setting. We conclude that the OOV dataset proposed in this challenge will be an essential resource for developing scene text models that achieve more robust and generalized predictions. | ||||
Address | Tel-Aviv; Israel; October 2022 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ECCVW | ||
Notes | DAG; 600.155; 302.105; 611.002 | Approved | no | ||
Call Number | Admin @ si @ GMB2022 | Serial | 3771 | ||
Permanent link to this record | |||||
Author | Iban Berganzo-Besga; Hector A. Orengo; Felipe Lumbreras; Paloma Aliende; Monica N. Ramsey | ||||
Title | Automated detection and classification of multi-cell Phytoliths using Deep Learning-Based Algorithms | Type | Journal Article | ||
Year | 2022 | Publication | Journal of Archaeological Science | Abbreviated Journal | JArchSci |
Volume | 148 | Issue | Pages | 105654 | |
Keywords | |||||
Abstract | This paper presents an algorithm for automated detection and classification of multi-cell phytoliths, one of the major components of many archaeological and paleoenvironmental deposits. This identification, based on the phytolith wave pattern, is made using a pretrained VGG19 deep learning model. The approach has been tested on three phytolith genera that are key to the study of agricultural origins in Near East archaeology: Avena, Hordeum and Triticum. The classification has also been validated at species level using Triticum boeoticum and dicoccoides images. Because of the diversity of microscopes, cameras and chemical treatments that can influence images of phytolith slides, three types of data augmentation techniques have been implemented: rotation of the images at 45-degree angles, random colour and brightness jittering, and random blur/sharpen. The implemented workflow has achieved an overall accuracy of 93.68% for phytolith genera, improving on previous attempts. The algorithm has also demonstrated its potential to automate the classification of phytolith species, with an overall accuracy of 100%. The open code and platforms employed to develop the algorithm ensure the method's accessibility, reproducibility and reusability. | ||||
Address | December 2022 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MSIAU; MACO; 600.167 | Approved | no | ||
Call Number | Admin @ si @ BOL2022 | Serial | 3753 | ||
Permanent link to this record | |||||
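The augmentation pipeline named in the record above (rotation at 45-degree angles, brightness jittering, random blur/sharpen) can be sketched in plain numpy; the jitter ranges and the 3x3 box kernel are assumptions, and the sharpen branch is omitted for brevity.

```python
import numpy as np

def rotate_nn(img, deg):
    """Nearest-neighbour rotation about the image centre, keeping size;
    pixels rotated in from outside the frame are filled with zeros."""
    t = np.deg2rad(deg)
    h, w = img.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    ys, xs = np.mgrid[0:h, 0:w]
    # inverse mapping: for each output pixel, locate the source pixel
    sy = np.rint(np.cos(t) * (ys - cy) + np.sin(t) * (xs - cx) + cy).astype(int)
    sx = np.rint(-np.sin(t) * (ys - cy) + np.cos(t) * (xs - cx) + cx).astype(int)
    valid = (sy >= 0) & (sy < h) & (sx >= 0) & (sx < w)
    out = np.zeros_like(img)
    out[valid] = img[sy[valid], sx[valid]]
    return out

def augment(img, rng):
    """One random augmentation pass: 45-degree-step rotation, brightness
    jitter, and an optional 3x3 box blur (jitter ranges are assumptions)."""
    img = rotate_nn(img, rng.choice([0, 45, 90, 135, 180, 225, 270, 315]))
    img = np.clip(img * rng.uniform(0.8, 1.2) + rng.uniform(-10, 10), 0, 255)
    if rng.random() < 0.5:  # random blur (the sharpen branch is omitted)
        p = np.pad(img, 1, mode='edge')
        img = sum(p[i:i + img.shape[0], j:j + img.shape[1]]
                  for i in range(3) for j in range(3)) / 9
    return img

augmented = augment(np.arange(64.0).reshape(8, 8), np.random.default_rng(1))
print(augmented.shape)  # (8, 8)
```

In practice such transforms would be applied on the fly during training to simulate the microscope and treatment variability the paper describes.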
Author | Rafael E. Rivadeneira; Angel Sappa; Boris X. Vintimilla; Riad I. Hammoud | ||||
Title | A Novel Domain Transfer-Based Approach for Unsupervised Thermal Image Super-Resolution | Type | Journal Article | ||
Year | 2022 | Publication | Sensors | Abbreviated Journal | SENS |
Volume | 22 | Issue | 6 | Pages | 2254 |
Keywords | Thermal image super-resolution; unsupervised super-resolution; thermal images; attention module; semiregistered thermal images | ||||
Abstract | This paper presents a transfer-domain strategy to tackle the limitations of low-resolution thermal sensors and generate higher-resolution images of reasonable quality. The proposed technique employs a CycleGAN architecture and uses a ResNet as the encoder in the generator, along with an attention module and a novel loss function. The network is trained on a multi-resolution thermal image dataset acquired with three different thermal sensors. Benchmarking on the 2nd CVPR-PBVS-2021 thermal image super-resolution challenge shows better performance than state-of-the-art methods. The code of this work is available online. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MSIAU; | Approved | no | ||
Call Number | Admin @ si @ RSV2022b | Serial | 3688 | ||
Permanent link to this record | |||||
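The unsupervised training in the record above rests on CycleGAN's cycle-consistency constraint between the two thermal-resolution domains. A minimal numpy sketch of that loss term, with toy stand-in generators; the L1 form and the weight lam=10 follow the common CycleGAN formulation, not details from the paper.

```python
import numpy as np

def l1(a, b):
    """Mean absolute error, the reconstruction term used in cycle losses."""
    return np.mean(np.abs(a - b))

def cycle_loss(g_ab, g_ba, a, b, lam=10.0):
    """CycleGAN-style cycle-consistency loss: an image mapped to the other
    domain and back should match the original. g_ab / g_ba are stand-in
    functions here; lam is the conventional cycle-loss weight."""
    return lam * (l1(g_ba(g_ab(a)), a) + l1(g_ab(g_ba(b)), b))

# Toy 'generators': mutually inverse pixel transforms standing in for the
# low-resolution -> high-resolution network and its reverse mapping.
g_ab = lambda x: 2.0 * x + 1.0
g_ba = lambda x: (x - 1.0) / 2.0
rng = np.random.default_rng(0)
a, b = rng.random((8, 8)), rng.random((8, 8))
print(cycle_loss(g_ab, g_ba, a, b))  # ~0: the toy generators invert each other
```

Training drives this term toward zero, so no paired low/high-resolution images are ever needed, which is what makes the approach unsupervised.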
Author | Patricia Suarez; Dario Carpio; Angel Sappa; Henry Velesaca | ||||
Title | Transformer based Image Dehazing | Type | Conference Article | ||
Year | 2022 | Publication | 16th IEEE International Conference on Signal Image Technology & Internet Based System | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | atmospheric light; brightness component; computational cost; dehazing quality; haze-free image | ||||
Abstract | This paper presents a novel approach to remove non-homogeneous haze from real images. The proposed method consists mainly of image feature extraction, haze removal, and image reconstruction. To accomplish this challenging task, we propose an architecture based on transformers, which have been recently introduced and have shown great potential in different computer vision tasks. Our model builds on SwinIR, a transformer-based image restoration architecture, but modifies the deep feature extraction module and the depth of the model, and applies a combined loss function that improves styling and adapts the model to the non-homogeneous haze present in images. The obtained results prove to be superior to those obtained by state-of-the-art models. | ||||
Address | Dijon; France; October 2022 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | SITIS | ||
Notes | MSIAU; no proj | Approved | no | ||
Call Number | Admin @ si @ SCS2022 | Serial | 3803 | ||
Permanent link to this record | |||||
Author | Jorge Charco; Angel Sappa; Boris X. Vintimilla | ||||
Title | Human Pose Estimation through a Novel Multi-view Scheme | Type | Conference Article | ||
Year | 2022 | Publication | 17th International Conference on Computer Vision Theory and Applications (VISAPP 2022) | Abbreviated Journal | |
Volume | 5 | Issue | Pages | 855-862 | |
Keywords | Multi-view Scheme; Human Pose Estimation; Relative Camera Pose; Monocular Approach | ||||
Abstract | This paper presents a multi-view scheme to tackle the challenging self-occlusion problem in human pose estimation. The proposed approach first obtains the human body joints from a set of images captured from different views at the same time. Then, it enhances the obtained joints by using a multi-view scheme: the joints from a given view are used to enhance poorly estimated joints from another view, especially in self-occlusion cases. A network architecture initially proposed for the monocular case is adapted for use in the proposed multi-view scheme. Experimental results and comparisons with state-of-the-art approaches on the Human3.6M dataset are presented, showing improvements in the accuracy of body joint estimation. | ||||
Address | Online; Feb 6-8, 2022 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 2184-4321 | ISBN | 978-989-758-555-5 | Medium | |
Area | Expedition | Conference | VISAPP | ||
Notes | MSIAU; 600.160 | Approved | no | ||
Call Number | Admin @ si @ CSV2022 | Serial | 3689 | ||
Permanent link to this record | |||||
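The multi-view scheme in the record above relies on relating body joints across cameras through their relative pose. A minimal pinhole-projection sketch of that geometric link; all camera parameters here are illustrative, and the paper estimates the relative pose with a CNN rather than assuming it known.

```python
import numpy as np

def project(K, R, t, X):
    """Project 3D points X (N, 3) from world coordinates into pixel
    coordinates of a camera with intrinsics K and extrinsics [R|t]."""
    Xc = X @ R.T + t           # world -> camera frame
    uv = Xc @ K.T              # pinhole projection
    return uv[:, :2] / uv[:, 2:3]

# A 3D joint seen by two cameras: a joint occluded in one view can still
# be localized through its projection in the other view.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])  # toy intrinsics
joint = np.array([[0.2, 0.1, 3.0]])                          # one 3D joint
cam1 = project(K, np.eye(3), np.zeros(3), joint)
# second camera shifted 0.5 m to the right of the first
cam2 = project(K, np.eye(3), np.array([-0.5, 0.0, 0.0]), joint)
print(cam1, cam2)
```

With known relative pose, the two projections are consistent estimates of the same joint, which is what lets one view correct the other.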
Author | Xavier Soria; Gonzalo Pomboza-Junez; Angel Sappa | ||||
Title | LDC: Lightweight Dense CNN for Edge Detection | Type | Journal Article | ||
Year | 2022 | Publication | IEEE Access | Abbreviated Journal | ACCESS |
Volume | 10 | Issue | Pages | 68281-68290 | |
Keywords | |||||
Abstract | This paper presents a Lightweight Dense Convolutional (LDC) neural network for edge detection. The proposed model is an adaptation of two state-of-the-art approaches, but requires less than 4% of their parameters. The proposed architecture generates thin edge maps and reaches the highest score (i.e., ODS) when compared with lightweight models (models with less than 1 million parameters), and reaches a similar performance when compared with heavy architectures (models with about 35 million parameters). Both quantitative and qualitative results and comparisons with state-of-the-art models, using different edge detection datasets, are provided. The proposed LDC does not use pre-trained weights and requires straightforward hyper-parameter settings. The source code is released at https://github.com/xavysp/LDC | ||||
Address | 27 June 2022 | ||||
Corporate Author | Thesis | ||||
Publisher | IEEE | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MSIAU; MACO; 600.160; 600.167 | Approved | no | ||
Call Number | Admin @ si @ SPS2022 | Serial | 3751 | ||
Permanent link to this record | |||||
Author | Spencer Low; Oliver Nina; Angel Sappa; Erik Blasch; Nathan Inkawhich | ||||
Title | Multi-Modal Aerial View Object Classification Challenge Results – PBVS 2022 | Type | Conference Article | ||
Year | 2022 | Publication | IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) | Abbreviated Journal | |
Volume | Issue | Pages | 350-358 | ||
Keywords | |||||
Abstract | This paper details the results and main findings of the second iteration of the Multi-modal Aerial View Object Classification (MAVOC) challenge. The primary goal of both MAVOC challenges is to inspire research into methods for building recognition models that utilize both synthetic aperture radar (SAR) and electro-optical (EO) imagery. Teams are encouraged to develop multi-modal approaches that incorporate complementary information from both domains. While the 2021 challenge showed a proof of concept that both modalities could be used together, the 2022 challenge focuses on detailed multi-modal methods. The 2022 challenge uses the same UNIfied Coincident Optical and Radar for recognitioN (UNICORN) dataset and competition format as 2021. Specifically, the challenge focuses on two tasks: (1) SAR classification and (2) SAR + EO classification. The bulk of this document is dedicated to discussing the top-performing methods and describing their performance on our blind test set. Notably, all of the top ten teams outperform a ResNet-18 baseline. For SAR classification, the top team showed a 129% improvement over the baseline and an 8% average improvement over the 2021 winner. The top team for SAR + EO classification shows a 165% improvement, with a 32% average improvement over 2021. | ||||
Address | New Orleans; USA; June 2022 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPRW | ||
Notes | MSIAU | Approved | no | ||
Call Number | Admin @ si @ LNS2022 | Serial | 3768 | ||
Permanent link to this record | |||||
Author | Jorge Charco; Angel Sappa; Boris X. Vintimilla; Henry Velesaca | ||||
Title | Human Body Pose Estimation in Multi-view Environments | Type | Book Chapter | ||
Year | 2022 | Publication | ICT Applications for Smart Cities. Intelligent Systems Reference Library | Abbreviated Journal | |
Volume | 224 | Issue | Pages | 79-99 | |
Keywords | |||||
Abstract | This chapter tackles the challenging problem of human pose estimation in multi-view environments, to handle scenes with self-occlusions. The proposed approach starts by estimating the camera pose (extrinsic parameters) in multi-view scenarios; due to the scarcity of real image datasets, different virtual scenes are generated by using a special simulator for training and testing the proposed convolutional neural network-based approaches. These extrinsic parameters are then used to establish the relation between the different cameras in the multi-view scheme, which captures the pose of the person from different points of view at the same time. The proposed multi-view scheme allows human body joints' positions to be robustly estimated even in situations where they are occluded. This would help to avoid possible false alarms in the behavioral analysis systems of smart cities, as well as in applications such as physical therapy and safe movement assistance for the elderly, among others. The chapter concludes by presenting experimental results in real scenes using state-of-the-art approaches and the proposed multi-view approaches. | ||||
Address | September 2022 | ||||
Corporate Author | Thesis | ||||
Publisher | Springer | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | ISRL | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-3-031-06306-0 | Medium | ||
Area | Expedition | Conference | |||
Notes | MSIAU; MACO | Approved | no | ||
Call Number | Admin @ si @ CSV2022b | Serial | 3810 | ||
Permanent link to this record | |||||
Author | Henry Velesaca; Patricia Suarez; Dario Carpio; Rafael E. Rivadeneira; Angel Sanchez; Angel Morera | ||||
Title | Video Analytics in Urban Environments: Challenges and Approaches | Type | Book Chapter | ||
Year | 2022 | Publication | ICT Applications for Smart Cities | Abbreviated Journal | |
Volume | 224 | Issue | Pages | 101-121 | |
Keywords | |||||
Abstract | This chapter reviews state-of-the-art approaches generally present in the pipeline of video analytics in urban scenarios. A typical pipeline is used to cluster approaches from the literature, including image preprocessing, object detection, object classification, and object tracking modules. A review of recent approaches for each module is then given. Additionally, applications and datasets generally used for training and evaluating the performance of these approaches are included. This chapter is not intended to be an exhaustive review of state-of-the-art video analytics in urban environments, but rather an illustration of some of the different recent contributions. The chapter concludes by presenting current trends in the field of urban video analytics. | ||||
Address | September 2022 | ||||
Corporate Author | Thesis | ||||
Publisher | Springer | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | ISRL | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-3-031-06306-0 | Medium | ||
Area | Expedition | Conference | |||
Notes | MSIAU; MACO | Approved | no | ||
Call Number | Admin @ si @ VSC2022 | Serial | 3811 | ||
Permanent link to this record |