|
Jiaolong Xu, Peng Wang, Heng Yang, & Antonio Lopez. (2019). Training a Binary Weight Object Detector by Knowledge Transfer for Autonomous Driving. In IEEE International Conference on Robotics and Automation (pp. 2379–2384).
Abstract: Autonomous driving imposes strict requirements on model size and energy efficiency in order to enable embedded systems to achieve real-time on-board object detection. Recent deep convolutional neural network based object detectors have achieved state-of-the-art accuracy. However, such models have numerous parameters, and their high computational cost and large storage footprint prohibit deployment on systems with limited memory and computation resources. Low-precision neural networks are popular techniques for reducing computation requirements and memory footprint. Among them, the binary weight neural network (BWN) is the extreme case, which quantizes the floating-point weights into just one bit. BWNs are difficult to train and suffer from accuracy degradation due to the extremely low-bit representation. To address this problem, we propose a knowledge transfer (KT) method to aid the training of a BWN using a full-precision teacher network. We build DarkNet- and MobileNet-based binary weight YOLO-v2 detectors and conduct experiments on the KITTI benchmark for car, pedestrian and cyclist detection. The experimental results show that the proposed method maintains high detection accuracy while reducing the model size of DarkNet-YOLO from 257 MB to 8.8 MB and of MobileNet-YOLO from 193 MB to 7.9 MB.
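For background (a minimal sketch of the general technique, not the authors' training code), the binarization underlying a BWN is typically the XNOR-Net-style sign-plus-scale approximation, where each full-precision weight tensor is replaced by its sign together with a single scaling factor:

    import torch

    def binarize_weights(w: torch.Tensor):
        # Keep one scalar alpha = mean(|w|) so that
        # alpha * sign(w) is the scaled-sign approximation of w.
        alpha = w.abs().mean()
        w_bin = torch.sign(w)   # +1 / -1 entries, storable in 1 bit each
        return alpha, w_bin

    w = torch.randn(64, 3, 3, 3)        # e.g. one conv layer's weights
    alpha, w_bin = binarize_weights(w)
    w_approx = alpha * w_bin            # used in place of w at inference

Knowledge transfer then trains such a binarized student under the guidance of a full-precision teacher, which is the paper's contribution.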
|
|
|
Sangeeth Reddy, Minesh Mathew, Lluis Gomez, Marçal Rusiñol, Dimosthenis Karatzas, & C.V. Jawahar. (2020). RoadText-1K: Text Detection and Recognition Dataset for Driving Videos. In IEEE International Conference on Robotics and Automation.
Abstract: Perceiving text is crucial to understanding the semantics of outdoor scenes and hence is a critical requirement for building intelligent systems for driver assistance and self-driving. Most of the existing datasets for text detection and recognition comprise still images and are mostly compiled with text in mind. This paper introduces the new "RoadText-1K" dataset for text in driving videos. The dataset is 20 times larger than the existing largest dataset for text in videos. It comprises 1000 video clips of driving without any bias towards text, with annotations for text bounding boxes and transcriptions in every frame. State-of-the-art methods for text detection, recognition and tracking are evaluated on the new dataset, and the results signify the challenges of unconstrained driving videos compared to existing datasets. This suggests that RoadText-1K is suited for research and development of reading systems robust enough to be incorporated into more complex downstream tasks like driver assistance and self-driving. The dataset can be found at http://cvit.iiit.ac.in/research/projects/cvit-projects/roadtext-1k
|
|
|
Alloy Das, Sanket Biswas, Umapada Pal, & Josep Llados. (2024). Diving into the Depths of Spotting Text in Multi-Domain Noisy Scenes. In IEEE International Conference on Robotics and Automation.
Abstract: When used in real-world noisy environments, the capacity to generalize to multiple domains is essential for any autonomous scene text spotting system. However, existing state-of-the-art methods employ pretraining and fine-tuning strategies on natural scene datasets, which do not exploit the feature interaction across other complex domains. In this work, we explore and investigate the problem of domain-agnostic scene text spotting, i.e., training a model on multi-domain source data such that it can directly generalize to target domains rather than being specialized for a specific domain or scenario. In this regard, we present to the community a text spotting validation benchmark called Under-Water Text (UWT) for noisy underwater scenes to establish an important case study. Moreover, we design an efficient super-resolution-based end-to-end transformer baseline called DA-TextSpotter, which achieves comparable or superior performance over existing text spotting architectures on both regular and arbitrary-shaped scene text spotting benchmarks in terms of both accuracy and model efficiency. The dataset, code and pre-trained models will be released upon acceptance.
|
|
|
Dena Bazazian, Raul Gomez, Anguelos Nicolaou, Lluis Gomez, Dimosthenis Karatzas, & Andrew Bagdanov. (2016). Improving Text Proposals for Scene Images with Fully Convolutional Networks. In 23rd International Conference on Pattern Recognition Workshops.
Abstract: Text Proposals have emerged as a class-dependent version of object proposals: efficient approaches to reduce the search space of possible text object locations in an image. Combined with strong word classifiers, text proposals currently yield state-of-the-art results in end-to-end scene text recognition. In this paper we propose an improvement over the original Text Proposals algorithm of [1], combining it with Fully Convolutional Networks to improve the ranking of proposals. Results on the ICDAR RRC and COCO-Text datasets show superior performance over the current state of the art.
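A rough picture of the ranking step: each candidate box is scored by the mean text probability the FCN heatmap assigns inside it, then sorted. A minimal NumPy sketch (the heatmap and proposal boxes are assumed inputs; this is not the paper's exact ranking function):

    import numpy as np

    def rank_proposals(heatmap: np.ndarray, proposals):
        # heatmap: HxW per-pixel text probabilities in [0, 1]
        # proposals: list of (x, y, w, h) candidate boxes
        def score(box):
            x, y, w, h = box
            region = heatmap[y:y + h, x:x + w]
            return region.mean() if region.size else 0.0
        # Highest mean text probability first.
        return sorted(proposals, key=score, reverse=True)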
|
|
|
Fatemeh Noroozi, Marina Marjanovic, Angelina Njegus, Sergio Escalera, & Gholamreza Anbarjafari. (2016). Fusion of Classifier Predictions for Audio-Visual Emotion Recognition. In 23rd International Conference on Pattern Recognition Workshops.
Abstract: This paper presents a novel multimodal emotion recognition system based on the analysis of audio and visual cues. MFCC-based features are extracted from the audio channel, and facial landmark geometric relations are computed from visual data. Both sets of features are learnt separately using state-of-the-art classifiers. In addition, we summarise each emotion video into a reduced set of key-frames, which are learnt in order to visually discriminate emotions by means of a Convolutional Neural Network. Finally, confidence outputs of all classifiers from all modalities are used to define a new feature space to be learnt for final emotion prediction, in a late fusion/stacking fashion. The conducted experiments on the eNTERFACE'05 database show significant performance improvements of our proposed system in comparison to state-of-the-art approaches.
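The late fusion/stacking step can be illustrated as follows: the per-modality confidence outputs are concatenated into a new feature vector, and a meta-classifier is trained on that space. A minimal scikit-learn sketch with placeholder data (the SVM meta-learner is an assumption, not necessarily the classifier used in the paper):

    import numpy as np
    from sklearn.svm import SVC

    # Placeholder confidences for 100 clips over 6 emotions, one block
    # per modality (audio MFCCs, facial landmarks, key-frame CNN).
    p_audio = np.random.rand(100, 6)
    p_face = np.random.rand(100, 6)
    p_cnn = np.random.rand(100, 6)
    y = np.random.randint(0, 6, 100)     # placeholder emotion labels

    # Stacking: the confidences define a new 18-D feature space ...
    X_stack = np.hstack([p_audio, p_face, p_cnn])

    # ... on which a meta-classifier makes the final prediction.
    meta = SVC(probability=True).fit(X_stack, y)
    final_pred = meta.predict(X_stack)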
|
|
|
Iiris Lusi, Sergio Escalera, & Gholamreza Anbarjafari. (2016). Human Head Pose Estimation on SASE database using Random Hough Regression Forests. In 23rd International Conference on Pattern Recognition Workshops (LNCS, Vol. 10165).
Abstract: In recent years head pose estimation has become an important task in face analysis scenarios. Given the availability of high-resolution 3D sensors, the design of a high-resolution head pose database would be beneficial for the community. In this paper, Random Hough Forests are used to estimate 3D head pose and location on a new 3D head database, SASE, which represents the baseline performance on the new data for an upcoming international head pose estimation competition. The data in SASE were acquired with a Microsoft Kinect 2 camera, including RGB and depth information for 50 subjects with a large sample of head poses, allowing us to test methods for real-life scenarios. We briefly review the database and show baseline head pose estimation results based on Random Hough Forests.
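As a loose illustration of the regression-forest idea (a true Hough forest grows specialized trees and weights its votes; the stand-in below simply averages patch-level votes from a generic random forest), hypothetical depth-patch features regress a 6-DoF head pose:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Placeholder training data: one row per depth patch, targets are
    # (head centre x, y, z, yaw, pitch, roll) relative to the patch.
    X_train = np.random.rand(5000, 64)
    y_train = np.random.rand(5000, 6)
    forest = RandomForestRegressor(n_estimators=20).fit(X_train, y_train)

    # At test time every patch casts a vote; the pose estimate is an
    # aggregate (here simply the mean) over all patch votes.
    X_test = np.random.rand(200, 64)    # patches from one test frame
    pose = forest.predict(X_test).mean(axis=0)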
|
|
|
Gabriela Ramirez, Esau Villatoro, Bogdan Ionescu, Hugo Jair Escalante, Sergio Escalera, Martha Larson, et al. (2018). Overview of the Multimedia Information Processing for Personality & Social Networks Analysis Contest. In Multimedia Information Processing for Personality and Social Networks Analysis (MIPPSNA 2018).
|
|
|
Asma Bensalah, Jialuo Chen, Alicia Fornes, Cristina Carmona-Duarte, Josep Llados, & Miguel A. Ferrer. (2020). Towards Stroke Patients' Upper-Limb Automatic Motor Assessment Using Smartwatches. In International Workshop on Artificial Intelligence for Healthcare Applications (Vol. 12661, pp. 476–489).
Abstract: Assessing physical condition in rehabilitation scenarios is a challenging problem, since it involves Human Activity Recognition (HAR) and kinematic analysis methods. In addition, the difficulties increase in unconstrained rehabilitation scenarios, which are much closer to the real use cases. In particular, our aim is to design an upper-limb assessment pipeline for stroke patients using smartwatches. We focus on the HAR task, as it is the first part of the assessment pipeline. Our main target is to automatically detect and recognize four key movements inspired by the Fugl-Meyer assessment scale, which are performed in both constrained and unconstrained scenarios. In addition to the application protocol and dataset, we propose two detection and classification baseline methods. We believe that the proposed framework, dataset and baseline results will serve to foster this research field.
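A typical baseline of this kind slides a window over the smartwatch inertial signal, extracts simple statistics per window, and classifies each window as one of the key movements. A minimal sketch assuming 50 Hz tri-axial accelerometer data (illustrative only; the paper's two baselines are not reproduced here):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def windows(signal, size=100, step=50):
        # Slide a 2 s window (100 samples at 50 Hz) over an Nx3 signal.
        for start in range(0, len(signal) - size + 1, step):
            yield signal[start:start + size]

    def features(win):
        # Simple per-axis statistics as window features.
        return np.concatenate([win.mean(axis=0), win.std(axis=0),
                               win.min(axis=0), win.max(axis=0)])

    signal = np.random.randn(10000, 3)   # placeholder recording
    X = np.array([features(w) for w in windows(signal)])
    y = np.random.randint(0, 5, len(X))  # 4 key movements + "other"
    clf = RandomForestClassifier().fit(X, y)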
|
|
|
Roberto Morales, Juan Quispe, & Eduardo Aguilar. (2023). Exploring multi-food detection using deep learning-based algorithms. In 13th International Conference on Pattern Recognition Systems (pp. 1–7).
Abstract: People are becoming increasingly concerned about their diet, whether for disease prevention, medical treatment or other purposes. In meals served in restaurants, schools or public canteens, it is not easy to identify the ingredients and/or the nutritional information they contain. Currently, technological solutions based on deep learning models have facilitated the recording and tracking of food consumed by recognizing the main dish present in an image. Considering that multiple foods may be served on the same plate, food analysis should be treated as a multi-class object detection problem. EfficientDet and YOLOv5 are object detection algorithms that have demonstrated high mAP and real-time performance on general domain data. However, these models have not been evaluated and compared on public food datasets. Unlike general domain objects, foods have more challenging features inherent in their nature that increase the complexity of detection. In this work, we performed a performance evaluation of EfficientDet and YOLOv5 on three public food datasets: UNIMIB2016, UECFood256 and ChileanFood64. The results show that YOLOv5 significantly outperforms EfficientDet in both mAP and response time on all datasets. Furthermore, YOLOv5 outperforms the state of the art on UECFood256, achieving an improvement of more than 4% in terms of mAP@.50 over the best reported result.
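For reference, YOLOv5 inference of the kind benchmarked here takes a few lines via PyTorch Hub. Note that detecting the food classes of these datasets would require weights fine-tuned on them; the checkpoint below is the generic COCO-pretrained model and 'plate.jpg' is a placeholder path:

    import torch

    # Small COCO-pretrained YOLOv5 model from PyTorch Hub; fine-tuned
    # food-dataset weights would be loaded here instead.
    model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

    results = model('plate.jpg')           # path, URL or numpy array
    detections = results.pandas().xyxy[0]  # boxes, confidences, classes
    print(detections[['name', 'confidence']])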
|
|
|
Gisel Bastidas-Guacho, Patricio Moreno, Boris X. Vintimilla, & Angel Sappa. (2023). Application on the Loop of Multimodal Image Fusion: Trends on Deep-Learning Based Approaches. In 13th International Conference on Pattern Recognition Systems (Vol. 14234, pp. 25–36).
Abstract: Multimodal image fusion allows the combination of information from different modalities, which is useful for tasks such as object detection, edge detection, and tracking, to name a few. Using the fused representation for applications results in better task performance. There are several image fusion approaches, which have been summarized in surveys. However, the existing surveys focus on image fusion approaches where the application on the loop of multimodal image fusion is not considered. On the contrary, this study summarizes deep learning-based multimodal image fusion for computer vision (e.g., object detection) and image processing applications (e.g., semantic segmentation), that is, approaches where the application module leverages the multimodal fusion process to enhance the final result. Firstly, we introduce image fusion and the existing general frameworks for image fusion tasks such as multifocus, multiexposure and multimodal. Then, we describe the multimodal image fusion approaches. Next, we review the state-of-the-art deep learning multimodal image fusion approaches for vision applications. Finally, we conclude our survey with the trends of task-driven multimodal image fusion.
|
|
|
Santiago Segui, Michal Drozdzal, Petia Radeva, & Jordi Vitria. (2012). An Integrated Approach to Contextual Face Detection. In 1st International Conference on Pattern Recognition Applications and Methods (pp. 143–150).
Abstract: Face detection is, in general, based on content-based detectors. Nevertheless, the face is a non-rigid object with well-defined relations with respect to the human body parts. In this paper, we propose to benefit from context information in order to improve content-based face detections. We propose a novel framework for integrating multiple content- and context-based detectors in a discriminative way. Moreover, we develop an integrated scoring procedure that measures the 'faceness' of each hypothesis and is used to discriminate the detection results. Our approach detects a higher rate of faces while minimizing the number of false detections, giving an average increase of more than 10% in average precision when compared to state-of-the-art face detectors.
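One way to picture the discriminative integration is a learned 'faceness' score over the individual detector outputs; the logistic-regression sketch below illustrates score-level combination with hypothetical detector scores, not the paper's exact scoring procedure:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Placeholder scores per hypothesis: one content-based face
    # detector plus two context detectors (upper body, person).
    X = np.random.rand(500, 3)
    y = np.random.randint(0, 2, 500)    # 1 = true face, 0 = false alarm

    # The learned model outputs a single 'faceness' probability used
    # to accept or reject each detection hypothesis.
    faceness = LogisticRegression().fit(X, y)
    keep = faceness.predict_proba(X)[:, 1] > 0.5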
|
|
|
Diego Cheda, Daniel Ponsa, & Antonio Lopez. (2012). Monocular Egomotion Estimation based on Image Matching. In 1st International Conference on Pattern Recognition Applications and Methods (pp. 425–430).
|
|
|
Jose Carlos Rubio, Joan Serrat, & Antonio Lopez. (2012). Multiple target tracking and identity linking under split, merge and occlusion of targets and observations. In 1st International Conference on Pattern Recognition Applications and Methods.
|
|
|
Ferran Diego, G.D. Evangelidis, & Joan Serrat. (2012). Night-time outdoor surveillance by mobile cameras. In 1st International Conference on Pattern Recognition Applications and Methods (Vol. 2, pp. 365–371).
Abstract: This paper addresses the problem of video surveillance by mobile cameras. We present a method that allows online change detection in night-time outdoor surveillance. Because of the camera movement, background frames are not available and must be "localized" in former sequences and registered with the current frames. To this end, we propose a Frame Localization And Registration (FLAR) approach that solves the problem efficiently. Frames of former sequences define a database which is queried by current frames in turn. To quickly retrieve nearest neighbors, the database is indexed through a visual dictionary method based on the SURF descriptor. Furthermore, frame localization benefits from a temporal filter that exploits the temporal coherence of videos. Next, the recently proposed ECC alignment scheme is used to spatially register the synchronized frames. Finally, change detection methods are applied to the aligned frames in order to mark suspicious areas. Experiments with real night sequences recorded by in-vehicle cameras demonstrate the performance of the proposed method and verify its efficiency and effectiveness against other methods.
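The registration-plus-differencing core can be sketched with OpenCV's ECC implementation, assuming the background frame has already been retrieved from the database (the full method also handles retrieval, synchronization and temporal filtering; file paths are placeholders):

    import cv2
    import numpy as np

    cur = cv2.imread('current.png', cv2.IMREAD_GRAYSCALE)
    bg = cv2.imread('background.png', cv2.IMREAD_GRAYSCALE)

    # ECC: estimate an affine warp registering bg to cur.
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 1e-6)
    _, warp = cv2.findTransformECC(cur, bg, warp, cv2.MOTION_AFFINE,
                                   criteria, None, 1)  # no mask/smoothing
    bg_aligned = cv2.warpAffine(bg, warp, (cur.shape[1], cur.shape[0]),
                                flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)

    # Change detection on the aligned pair marks suspicious areas.
    diff = cv2.absdiff(cur, bg_aligned)
    _, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)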
|
|
|
F. de la Torre, Jordi Vitria, Petia Radeva, & J. Melenchon. (2000). EigenFiltering for flexible Eigentracking. In 15th International Conference on Pattern Recognition (Vol. 3, pp. 1118–1121).
|
|