|
Henry Velesaca, Patricia Suarez, Dario Carpio, Rafael E. Rivadeneira, Angel Sanchez, & Angel Morera. (2022). Video Analytics in Urban Environments: Challenges and Approaches. In ICT Applications for Smart Cities (Vol. 224, pp. 101–121). ISRL. Springer.
Abstract: This chapter reviews state-of-the-art approaches generally present in the pipeline of video analytics for urban scenarios. A typical pipeline is used to cluster approaches in the literature, including image preprocessing, object detection, object classification, and object tracking modules. A review of recent approaches for each module is then given, together with the applications and datasets generally used for training and evaluating these approaches. This chapter does not aim to be an exhaustive review of state-of-the-art video analytics in urban environments, but rather an illustration of some recent contributions. The chapter concludes by presenting current trends in the field of urban video analytics.
|
|
|
Angel Sappa (Ed.). (2022). ICT Applications for Smart Cities (Vol. 224). ISRL. Springer.
Part of the book series: Intelligent Systems Reference Library (ISRL)
Abstract: This book is the result of four years of work within the framework of the Ibero-American Research Network TICs4CI, funded by the CYTED program. In the coming decades, 85% of the world's population is expected to live in cities; hence, urban centers should be prepared to provide smart solutions for problems ranging from video surveillance and intelligent mobility to solid-waste recycling processes, to mention just a few. More specifically, the book describes the underlying technologies and practical implementations of several successful case studies of ICTs developed in the following smart city areas:
• Urban environment monitoring
• Intelligent mobility
• Waste recycling processes
• Video surveillance
• Computer-aided diagnosis in healthcare systems
• Computer vision-based approaches for efficiency in production processes
The book is intended for researchers and engineers in the field of ICTs for smart cities, as well as anyone who wants to learn about state-of-the-art approaches and challenges in this field.
Keywords: Computational Intelligence; Intelligent Systems; Smart Cities; ICT Applications; Machine Learning; Pattern Recognition; Computer Vision; Image Processing
|
|
|
Victoria Ruiz, Angel Sanchez, Jose F. Velez, & Bogdan Raducanu. (2022). Waste Classification with Small Datasets and Limited Resources. In ICT Applications for Smart Cities (Vol. 224, pp. 185–203). ISRL. Springer.
Abstract: Automatic waste recycling has become a very important societal challenge, raising people's awareness of the need for a cleaner environment and a more sustainable lifestyle. With the transition to Smart Cities, and thanks to advanced ICT solutions, this problem has received a new impulse. The focus of waste recycling has shifted from general waste-treatment facilities to individual responsibility, where each person should become aware of selective waste separation. The surge of mobile devices, accompanied by a significant increase in computational power, has facilitated this individual role. An automated image-based waste classification mechanism can help achieve more efficient recycling and reduce contamination from residuals. Despite the good results achieved with deep learning methodologies for this task, their Achilles' heel is that they require large neural networks, which need significant computational resources for training and are therefore not suitable for mobile devices. To circumvent this apparently intractable problem, we rely on knowledge distillation to transfer knowledge from a larger network (the 'teacher') to a smaller, more compact one (the 'student'), thus making image classification possible on a device with limited resources. For evaluation, we considered large architectures such as InceptionResNet and DenseNet as 'teachers', and several configurations of MobileNets as 'students'. We used the publicly available TrashNet dataset to demonstrate that the distillation process does not significantly affect the performance (e.g., classification accuracy) of the student network.
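The teacher-student transfer described in this abstract follows the standard knowledge-distillation formulation (Hinton et al.): a weighted sum of a temperature-softened cross-entropy against the teacher's outputs and a standard cross-entropy against the ground-truth labels. The sketch below is a minimal numpy illustration of that generic loss, not the chapter authors' implementation; the names `distillation_loss`, `T`, and `alpha` are ours.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; higher T yields softer distributions.
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Soft-target term: cross-entropy between the softened teacher and
    # student outputs, scaled by T^2 as in the standard formulation.
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    soft = -np.sum(p_teacher * np.log(p_student + 1e-12), axis=-1).mean() * T * T
    # Hard-target term: standard cross-entropy with the ground-truth labels.
    p = softmax(student_logits)
    hard = -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    return alpha * soft + (1.0 - alpha) * hard
```

In practice the teacher would be a large pretrained network (e.g., InceptionResNet) and the student a MobileNet, with the loss minimized over the student's weights only.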
|
|
|
Sergio Escalera, Alicia Fornes, Oriol Pujol, Josep Llados, & Petia Radeva. (2007). Multi-class Binary Object Categorization using Blurred Shape Models. In Progress in Pattern Recognition, Image Analysis and Applications, 12th Iberoamerican Congress on Pattern Recognition (Vol. 4756, pp. 773–782). LNCS.
|
|
|
Md. Mostafa Kamal Sarker, Hatem A. Rashwan, Estefania Talavera, Syeda Furruka Banu, Petia Radeva, et al. (2018). MACNet: Multi-scale Atrous Convolution Networks for Food Places Classification in Egocentric Photo-streams. In European Conference on Computer Vision Workshops (pp. 423–433). LNCS.
Abstract: First-person (wearable) cameras continually capture unscripted interactions of the camera user with objects, people, and scenes, reflecting the user's personal and relational tendencies. One of these preferences is the interaction with food events. The regulation of food intake and its duration is of great importance for protecting against diseases. Consequently, this work aims to develop a smart model able to determine the recurrence of a person's visits to food places during a day. The model is based on a deep end-to-end architecture for automatic food-place recognition from egocentric photo-streams. In this paper, we apply multi-scale atrous convolution networks to extract the key features related to food places from the input images. The proposed model is evaluated on an in-house private dataset called "EgoFoodPlaces". Experimental results show promising performance for food-place classification in egocentric photo-streams.
|
|
|
Muhammad Anwer Rao, David Vazquez, & Antonio Lopez. (2011). Opponent Colors for Human Detection. In J. Vitria, J.M. Sanches, & M. Hernandez (Eds.), 5th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 6669, pp. 363–370). LNCS. Berlin Heidelberg: Springer.
Abstract: Human detection is a key component in fields such as advanced driving assistance and video surveillance. However, even detecting non-occluded standing humans remains a subject of intensive research. Finding good features to build human models for detection is probably one of the most important issues to address. Currently, shape, texture, and motion features have received extensive attention in the literature. However, color-based features, which are important in other domains (e.g., image categorization), have received much less attention. In fact, the RGB color space has become the default choice; the focus has been on developing first- and second-order features on top of RGB space (e.g., HOG and co-occurrence matrices, respectively). In this paper we evaluate the opponent colors (OPP) space as a biologically inspired alternative for human detection. In particular, by feeding the OPP space into the baseline framework of Dalal et al. for human detection (based on RGB, HOG, and a linear SVM), we obtain better detection performance than by using RGB space. This is a relevant result since, to the best of our knowledge, the OPP space has not previously been used for human detection. It suggests that in the future it could be worthwhile to compute co-occurrence matrices, self-similarity features, etc., also on top of the OPP space, as we have done with HOG in this paper.
Keywords: Pedestrian Detection; Color; Part Based Models
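The opponent color transform evaluated in this paper has a standard linear form: a red-green channel, a yellow-blue channel, and an intensity channel computed from RGB. The numpy sketch below shows that commonly used formulation; the exact normalization in the paper may differ, and `rgb_to_opponent` is an illustrative name, not the authors' code.

```python
import numpy as np

def rgb_to_opponent(rgb):
    # rgb: float array of shape (..., 3), values in [0, 1].
    rgb = np.asarray(rgb, dtype=float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    o1 = (r - g) / np.sqrt(2.0)            # red-green opponent channel
    o2 = (r + g - 2.0 * b) / np.sqrt(6.0)  # yellow-blue opponent channel
    o3 = (r + g + b) / np.sqrt(3.0)        # intensity channel
    return np.stack([o1, o2, o3], axis=-1)
```

A HOG descriptor could then be computed per opponent channel, in place of the usual per-RGB-channel computation, before feeding a linear SVM.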
|
|
|
Panagiota Spyridonos, Fernando Vilariño, Jordi Vitria, Fernando Azpiroz, & Petia Radeva. (2006). Anisotropic Feature Extraction from Endoluminal Images for Detection of Intestinal Contractions. In R. Larsen, M. Nielsen, & J. Sporring (Eds.), 9th International Conference on Medical Image Computing and Computer-Assisted Intervention (Vol. 4191, pp. 161–168). LNCS. Berlin Heidelberg: Springer Verlag.
Abstract: Wireless endoscopy is a recent and unique technique allowing the visualization and study of the occurrence of contractions and the analysis of intestinal motility. Feature extraction is essential for obtaining efficient patterns to detect contractions in wireless video endoscopy of the small intestine. We propose a novel method based on anisotropic image filtering and efficient statistical classification of contraction features. In particular, we apply the image gradient tensor to mine informative skeletons from the original image and a sequence of descriptors to capture the characteristic pattern of contractions. Features extracted from the endoluminal images were evaluated in terms of their discriminatory ability in correctly classifying images as belonging to contractions or not. Classification was performed by means of a support vector machine classifier with a radial basis function kernel. Our classification achieved a sensitivity of 90.84% and a specificity of 94.43%. These preliminary results highlight the high efficiency of the selected descriptors and support the feasibility of the proposed method in assisting the automatic detection and analysis of contractions.
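The image gradient tensor underlying this kind of anisotropic feature extraction can be illustrated with the generic structure-tensor coherence measure: eigenvalues of the averaged 2x2 gradient outer-product tensor indicate how strongly oriented an image patch is. This is a minimal numpy sketch of that standard construction, not the authors' pipeline; `structure_tensor_coherence` is a hypothetical helper name.

```python
import numpy as np

def structure_tensor_coherence(img, eps=1e-12):
    # Global coherence of a 2-D image patch from its gradient structure
    # tensor: values near 1 indicate a strongly oriented (anisotropic)
    # pattern, values near 0 an isotropic or flat one.
    iy, ix = np.gradient(np.asarray(img, dtype=float))
    jxx, jxy, jyy = (ix * ix).mean(), (ix * iy).mean(), (iy * iy).mean()
    # Eigenvalues of the 2x2 tensor [[jxx, jxy], [jxy, jyy]].
    tr, det = jxx + jyy, jxx * jyy - jxy * jxy
    disc = np.sqrt(max(tr * tr / 4.0 - det, 0.0))
    l1, l2 = tr / 2.0 + disc, tr / 2.0 - disc
    return (l1 - l2) / (l1 + l2 + eps)
```

Per-patch coherence values (or richer tensor-derived descriptors) could then feed an RBF-kernel SVM, as the abstract describes for the contraction classifier.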
|
|
|
Fernando Vilariño, Panagiota Spyridonos, Jordi Vitria, Carolina Malagelada, & Petia Radeva. (2006). Linear Radial Patterns Characterization for Automatic Detection of Tonic Intestinal Contractions. In J.F. Martínez-Trinidad et al. (Eds.), 11th Iberoamerican Congress on Pattern Recognition (Vol. 4225, pp. 178–187). LNCS. Berlin Heidelberg: Springer Verlag.
Abstract: This work tackles the categorization of general linear radial patterns by means of the valleys and ridges detection and the use of descriptors of directional information, which are provided by steerable filters in different regions of the image. We successfully apply our proposal in the specific case of automatic detection of tonic contractions in video capsule endoscopy, which represent a paradigmatic example of linear radial patterns.
|
|
|
Fernando Vilariño, Panagiota Spyridonos, Jordi Vitria, Carolina Malagelada, & Petia Radeva. (2006). A Machine Learning Framework using SOMs: Applications in the Intestinal Motility Assessment. In J.F. Martínez-Trinidad et al. (Eds.), 11th Iberoamerican Congress on Pattern Recognition (Vol. 4225, pp. 188–197). LNCS. Berlin Heidelberg: Springer Verlag.
Abstract: Small bowel motility assessment by means of wireless capsule video endoscopy constitutes a novel clinical methodology in which a capsule with a micro-camera attached to it is swallowed by the patient, emitting an RF signal that is recorded as a video of its trip through the gut. In order to overcome the main drawbacks associated with this technique, mainly the large amount of visualization time required, our efforts have focused on the development of a machine learning system, built in sequential stages, which provides the specialists with the useful part of the video and rejects the parts not valid for analysis. We successfully used Self-Organizing Maps in a general semi-supervised framework to tackle the different learning stages of our system. The analysis of the diverse types of images and the automatic detection of intestinal contractions are performed from the perspective of intestinal motility assessment in a clinical environment.
|
|
|
Salim Jouili, Salvatore Tabbone, & Ernest Valveny. (2010). Comparing Graph Similarity Measures for Graphical Recognition. In Graphics Recognition. Achievements, Challenges, and Evolution. 8th International Workshop, GREC 2009. Selected Papers (Vol. 6020, pp. 37–48). LNCS. Springer Berlin Heidelberg.
Abstract: In this paper we evaluate four graph distance measures. The analysis is performed for document retrieval tasks. To this end, different kinds of documents are used, including line drawings (symbols), ancient documents (ornamental letters), shapes, and trademark logos. The experimental results show that the performance of each graph distance measure depends on the kind of data and the graph representation technique.
|
|
|
Dani Rowe, Jordi Gonzalez, Ivan Huerta, & Juan J. Villanueva. (2007). On Reasoning over Tracking Events. In 15th Scandinavian Conference on Image Analysis (Vol. 4522, pp. 502–511). LNCS.
|
|
|
Fadi Dornaika, & Bogdan Raducanu. (2007). Efficient Facial Expression Recognition for Human Robot Interaction. In Computational and Ambient Intelligence, 9th International Work-Conference on Artificial Neural Networks (Vol. 4507, pp. 700–708). LNCS.
|
|
|
W. Liu, & Josep Llados. (2006). Graphics Recognition. Ten Years Review and Future Perspectives (Vol. 3926). LNCS.
|
|
|
Fadi Dornaika, & Angel Sappa. (2007). Real-time Vehicle Ego-Motion using Stereo Pairs and Particle Filters. In Int. Conf. on Image Analysis and Recognition (Vol. 4633, pp. 469–480). LNCS.
|
|
|
David Rotger, Petia Radeva, E. Fernandez-Nofrerias, & J. Mauri. (2007). Blood Detection in IVUS Images for 3D Volume of Lumen Changes Measurement Due to Different Drugs Administration. In Computer Analysis of Images and Patterns, 12th International Conference (Vol. 4673, pp. 285–292). LNCS.
|
|