|
David Lloret, Joan Serrat, Antonio Lopez and Juan J. Villanueva. 2003. Ultrasound to MR Volume Registration for Brain Sinking Measurement. 1st Iberian Conference on Pattern Recognition and Image Analysis (IbPRIA 2003), 420–427. (LNCS.)
|
|
|
Judit Martinez, Eva Costa, P. Herreros, Antonio Lopez and Juan J. Villanueva. 2003. TV-Screen Quality Inspection by Artificial Vision. Proceedings SPIE 5132, Sixth International Conference on Quality Control by Artificial Vision (QCAV 2003).
Abstract: A real-time vision system for TV screen quality inspection is introduced. The whole system consists of eight cameras and one processor per camera. It acquires and processes 112 images in 6 seconds. The defects to be inspected can be grouped into four main categories (bubble, line-out, line reduction and landing), although there is large variability within each particular type of defect. The complexity of the whole inspection process has been reduced by dividing images into smaller ones and grouping the defects into frequency- and intensity-relevant ones. Tools such as mathematical morphology, the Fourier transform, profile analysis and classification have been used. The performance of the system has been successfully validated against human operators under normal production conditions.
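As an illustration of the frequency-domain defect grouping mentioned in this abstract, the following is a minimal NumPy sketch of how a periodic screen pattern could be suppressed in the Fourier domain to expose an aperiodic defect; the function name, threshold and synthetic test image are hypothetical and not the system described in the paper.

```python
import numpy as np

def highlight_frequency_defects(image, suppress_fraction=0.02):
    """Suppress the dominant periodic screen pattern and return a residual map.

    A defect-free screen image is dominated by a few strong spatial frequencies;
    zeroing them and transforming back leaves mostly the aperiodic content
    (bubbles, line-outs, ...). Purely illustrative.
    """
    spectrum = np.fft.fft2(image.astype(np.float64))
    magnitude = np.abs(spectrum)
    # Zero the strongest frequencies, which carry the regular screen pattern.
    threshold = np.quantile(magnitude, 1.0 - suppress_fraction)
    spectrum[magnitude >= threshold] = 0.0
    residual = np.abs(np.fft.ifft2(spectrum))
    # Normalise to [0, 1] so a fixed decision threshold could be applied.
    return residual / (residual.max() + 1e-9)

if __name__ == "__main__":
    # Synthetic example: a periodic pattern with a single injected defect.
    y, x = np.mgrid[0:256, 0:256]
    screen = 0.5 + 0.5 * np.sin(2 * np.pi * x / 8)
    screen[100:104, 120:124] += 1.0          # injected "bubble"
    defect_map = highlight_frequency_defects(screen)
    print("max response at:", np.unravel_index(defect_map.argmax(), defect_map.shape))
```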
|
|
|
Jiaolong Xu, Peng Wang, Heng Yang and Antonio Lopez. 2019. Training a Binary Weight Object Detector by Knowledge Transfer for Autonomous Driving. IEEE International Conference on Robotics and Automation, 2379–2384.
Abstract: Autonomous driving has harsh requirements of small model size and energy efficiency, in order to enable the embedded system to achieve real-time on-board object detection. Recent deep convolutional neural network based object detectors have achieved state-of-the-art accuracy. However, such models are trained with numerous parameters, and their high computational costs and large storage prohibit deployment to systems with limited memory and computation resources. Low-precision neural networks are popular techniques for reducing the computation requirements and memory footprint. Among them, the binary weight neural network (BWN) is the extreme case, which quantizes the floating-point weights to just one bit. BWNs are difficult to train and suffer from accuracy degradation due to the extremely low-bit representation. To address this problem, we propose a knowledge transfer (KT) method to aid the training of BWNs using a full-precision teacher network. We build DarkNet- and MobileNet-based binary weight YOLO-v2 detectors and conduct experiments on the KITTI benchmark for car, pedestrian and cyclist detection. The experimental results show that the proposed method maintains high detection accuracy while reducing the model size of DarkNet-YOLO from 257 MB to 8.8 MB and MobileNet-YOLO from 193 MB to 7.9 MB.
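A minimal PyTorch sketch of the two ingredients combined in this abstract, weight binarization with a straight-through estimator and a feature-mimicking knowledge-transfer loss; the layer, loss weighting and tensor shapes are illustrative assumptions rather than the detector configuration used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinaryConv2d(nn.Conv2d):
    """Conv layer whose weights are binarized to {-alpha, +alpha} in the forward pass."""

    def forward(self, x):
        alpha = self.weight.abs().mean()              # per-layer scaling factor
        w_bin = alpha * torch.sign(self.weight)
        # Straight-through estimator: binary weights forward,
        # gradients flow to the underlying real-valued weights.
        w = self.weight + (w_bin - self.weight).detach()
        return F.conv2d(x, w, self.bias, self.stride, self.padding)

def knowledge_transfer_loss(student_feat, teacher_feat, student_out, labels, beta=0.5):
    """Task loss (cross-entropy as a stand-in) plus a feature-mimicking term."""
    task_loss = F.cross_entropy(student_out, labels)
    kt_loss = F.mse_loss(student_feat, teacher_feat.detach())
    return task_loss + beta * kt_loss

# Illustrative usage with random tensors standing in for detector features.
layer = BinaryConv2d(3, 16, kernel_size=3, padding=1)
_ = layer(torch.randn(1, 3, 32, 32))
student_feat = torch.randn(4, 64, 16, 16, requires_grad=True)
teacher_feat = torch.randn(4, 64, 16, 16)
student_out = torch.randn(4, 3, requires_grad=True)   # 3 classes: car/pedestrian/cyclist
labels = torch.randint(0, 3, (4,))
loss = knowledge_transfer_loss(student_feat, teacher_feat, student_out, labels)
loss.backward()
```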
|
|
|
Antonio Lopez and Joan Serrat. 1996. Tracing crease curves by solving a system of differential equations. ECCV 1996. (LNCS.)
|
|
|
Marçal Rusiñol, David Aldavert, Ricardo Toledo and Josep Llados. 2015. Towards Query-by-Speech Handwritten Keyword Spotting. 13th International Conference on Document Analysis and Recognition (ICDAR 2015), 501–505.
Abstract: In this paper, we present a new querying paradigm for handwritten keyword spotting. We propose to represent handwritten word images by both visual and audio representations, enabling a query-by-speech keyword spotting system. The two representations are merged together and projected to a common sub-space in the training phase. This transform allows, given a spoken query, retrieving word instances that were represented only by the visual modality. In addition, the same method can be used backwards at no additional cost to produce a handwritten text-to-speech system. We present our first results on this new querying mechanism using synthetic voices over the George Washington dataset.
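A minimal sketch of the cross-modal retrieval idea in this abstract, using scikit-learn's CCA as a stand-in for the common sub-space projection; the descriptor dimensions, index size and scoring are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)

# Stand-in descriptors: one visual and one audio vector per training word.
visual_train = rng.standard_normal((500, 128))   # e.g. word-image descriptors
audio_train = rng.standard_normal((500, 64))     # e.g. spoken-word descriptors

# Learn a common sub-space from paired training examples.
cca = CCA(n_components=16)
cca.fit(visual_train, audio_train)

# Index side: word images that exist only in the visual modality.
visual_index = rng.standard_normal((2000, 128))
index_proj = cca.transform(visual_index)          # x-scores in the common sub-space

def query_by_speech(audio_query):
    """Project a spoken query into the common sub-space and rank word images."""
    # transform() needs an X argument; a zero dummy is fine because the
    # y-scores are computed from the audio input alone.
    _, q = cca.transform(np.zeros((1, 128)), audio_query.reshape(1, -1))
    q = q.ravel()
    sims = index_proj @ q / (
        np.linalg.norm(index_proj, axis=1) * np.linalg.norm(q) + 1e-9
    )
    return np.argsort(-sims)                       # best-matching word images first

print(query_by_speech(rng.standard_normal(64))[:5])
```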
|
|
|
Muhammad Anwer Rao, Fahad Shahbaz Khan, Joost Van de Weijer and Jorma Laaksonen. 2017. Top-Down Deep Appearance Attention for Action Recognition. 20th Scandinavian Conference on Image Analysis, 297–309. (LNCS.)
Abstract: Recognizing human actions in videos is a challenging problem in computer vision. Recently, convolutional neural network based deep features have shown promising results for action recognition. In this paper, we investigate the problem of fusing deep appearance and motion cues for action recognition. We propose a video representation which combines deep appearance and motion based local convolutional features within the bag-of-deep-features framework. Firstly, dense deep appearance and motion based local convolutional features are extracted from spatial (RGB) and temporal (flow) networks, respectively. Both visual cues are processed in parallel by constructing separate visual vocabularies for appearance and motion. A category-specific appearance map is then learned to modulate the weights of the deep motion features. The proposed representation is discriminative and binds the deep local convolutional features to their spatial locations. Experiments are performed on two challenging datasets: the JHMDB dataset with 21 action classes and the ACT dataset with 43 categories. The results clearly demonstrate that our approach outperforms both standard approaches of early and late feature fusion. Further, our approach employs only action labels, without exploiting body-part information, yet achieves competitive performance compared to the state-of-the-art deep features based approaches.
Keywords: Action recognition; CNNs; Feature fusion
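A minimal PyTorch sketch of the central operation described in this abstract: a category-specific appearance map re-weighting dense motion features before they are aggregated into a bag-of-deep-features histogram. The soft-assignment step, tensor shapes and vocabulary size are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def attention_weighted_motion_bof(appearance_map, motion_feats, motion_vocab):
    """Aggregate local motion features into a histogram, weighted by appearance attention.

    appearance_map : (H, W)    category-specific appearance attention (higher = more relevant)
    motion_feats   : (H, W, D) dense local convolutional features from the flow network
    motion_vocab   : (K, D)    visual vocabulary (e.g. k-means centroids) for motion features
    """
    H, W, D = motion_feats.shape
    weights = torch.softmax(appearance_map.reshape(-1), dim=0)      # (H*W,)
    feats = motion_feats.reshape(-1, D)                             # (H*W, D)
    # Soft-assign each local feature to vocabulary words by cosine similarity.
    assign = F.softmax(
        F.normalize(feats, dim=1) @ F.normalize(motion_vocab, dim=1).T, dim=1
    )                                                               # (H*W, K)
    # Attention-weighted bag-of-deep-features histogram.
    hist = weights @ assign                                         # (K,)
    return hist / (hist.sum() + 1e-9)

# Illustrative usage with random tensors.
hist = attention_weighted_motion_bof(
    torch.rand(14, 14), torch.randn(14, 14, 512), torch.randn(100, 512)
)
print(hist.shape)   # torch.Size([100])
```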
|
|
|
Xavier Soria, Yachuan Li, Mohammad Rouhani and Angel Sappa. 2023. Tiny and Efficient Model for the Edge Detection Generalization. Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops.
Abstract: Most high-level computer vision tasks rely on low-level image operations as their initial processes. Operations such as edge detection, image enhancement, and super-resolution provide the foundations for higher-level image analysis. In this work we address edge detection considering three main objectives: simplicity, efficiency, and generalization, since current state-of-the-art (SOTA) edge detection models have grown in complexity in pursuit of better accuracy. To achieve this, we present the Tiny and Efficient Edge Detector (TEED), a light convolutional neural network with only 58K parameters, less than 0.2% of the state-of-the-art models. Training on the BIPED dataset takes less than 30 minutes, with each epoch requiring less than 5 minutes. Our proposed model is easy to train and converges quickly within the very first few epochs, while the predicted edge-maps are crisp and of high quality. Additionally, we propose a new dataset to test the generalization of edge detection, which comprises samples from popular images used in edge detection and image segmentation. The source code is available at https://github.com/xavysp/TEED.
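Since the abstract emphasizes a very small convolutional edge detector, here is a hypothetical sketch of such a lightweight network in PyTorch; the block layout, channel counts and fusion head are illustrative and do not reproduce the actual TEED architecture.

```python
import torch
import torch.nn as nn

class TinyEdgeNet(nn.Module):
    """A deliberately small edge detector: a few conv blocks plus a 1x1 fusion head."""

    def __init__(self, channels=(16, 32, 48)):
        super().__init__()
        blocks, in_ch = [], 3
        for out_ch in channels:
            blocks.append(nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            ))
            in_ch = out_ch
        self.blocks = nn.ModuleList(blocks)
        # Fuse per-block edge responses into a single edge map.
        self.side = nn.ModuleList([nn.Conv2d(c, 1, kernel_size=1) for c in channels])
        self.fuse = nn.Conv2d(len(channels), 1, kernel_size=1)

    def forward(self, x):
        sides = []
        for block, side in zip(self.blocks, self.side):
            x = block(x)
            sides.append(side(x))
        edge = self.fuse(torch.cat(sides, dim=1))
        return torch.sigmoid(edge)        # per-pixel edge probability

model = TinyEdgeNet()
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e3:.1f}K parameters")    # small by construction
edges = model(torch.randn(1, 3, 320, 320))
print(edges.shape)                            # torch.Size([1, 1, 320, 320])
```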
|
|
|
Cristina Cañero and 16 others. 1999. Three-dimensional reconstruction and quantification of the coronary tree using intravascular ultrasound images. Proceedings of the International Conference on Computers in Cardiology (CIC'99).
Abstract: In this paper we propose a new Computer Vision technique to reconstruct the vascular wall in space using a deformable model-based technique and compounding methods, based on the fusion of biplane angiography and intravascular ultrasound data. A general-purpose three-dimensional guided interpolation method is also proposed. The three-dimensional centerline of the vessel is reconstructed from geometrically corrected biplane angiographies using automatic segmentation methods and snakes. The IVUS image planes are located in the three-dimensional space and correctly oriented. A guided interpolation method based on B-surfaces and snakes is used to fill the gaps among image planes.
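A minimal NumPy sketch of one geometric step described in this abstract: placing equally spaced IVUS cross-section planes along a reconstructed 3D centerline, each oriented perpendicular to the local tangent. The parallel-transport frame and the helix test curve are illustrative assumptions, not the paper's method.

```python
import numpy as np

def place_ivus_planes(centerline, n_planes):
    """Return (origin, u, v) for n_planes cross-sections along a 3D centerline.

    Each plane passes through a point of the centerline and is spanned by two
    unit vectors (u, v) orthogonal to the local tangent; consecutive frames are
    parallel-transported to avoid sudden twists.
    """
    # Arc-length parametrisation, then resample uniformly.
    seg = np.linalg.norm(np.diff(centerline, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    s_new = np.linspace(0.0, s[-1], n_planes)
    pts = np.column_stack([np.interp(s_new, s, centerline[:, k]) for k in range(3)])

    tangents = np.gradient(pts, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)

    frames = []
    u = np.cross(tangents[0], [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-6:                   # tangent parallel to the z-axis
        u = np.cross(tangents[0], [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    for p, t in zip(pts, tangents):
        u = u - np.dot(u, t) * t                   # parallel transport: keep u orthogonal to t
        u /= np.linalg.norm(u)
        v = np.cross(t, u)
        frames.append((p, u.copy(), v))
    return frames

# Illustrative helix standing in for a reconstructed coronary centerline.
t = np.linspace(0, 4 * np.pi, 200)
helix = np.column_stack([np.cos(t), np.sin(t), 0.1 * t])
planes = place_ivus_planes(helix, n_planes=20)
print(len(planes), planes[0][0])
```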
|
|
|
German Ros, Laura Sellart, Joanna Materzynska, David Vazquez and Antonio Lopez. 2016. The SYNTHIA Dataset: A Large Collection of Synthetic Images for Semantic Segmentation of Urban Scenes. 29th IEEE Conference on Computer Vision and Pattern Recognition, 3234–3243.
Abstract: Vision-based semantic segmentation in urban scenarios is a key functionality for autonomous driving. The irruption of deep convolutional neural networks (DCNNs) makes it possible to foresee obtaining reliable classifiers to perform such a visual task. However, DCNNs require learning many parameters from raw images; thus, a sufficient amount of diversified images with class annotations is needed. These annotations are obtained through cumbersome human labour, especially challenging for semantic segmentation since pixel-level annotations are required. In this paper, we propose to use a virtual world for automatically generating realistic synthetic images with pixel-level annotations. Then, we address the question of how useful such data can be for the task of semantic segmentation; in particular, when using a DCNN paradigm. In order to answer this question we have generated a synthetic diversified collection of urban images, named SYNTHIA, with automatically generated class annotations. We use SYNTHIA in combination with publicly available real-world urban images with manually provided annotations. Then, we conduct experiments on a DCNN setting that show how the inclusion of SYNTHIA in the training stage significantly improves the performance of the semantic segmentation task.
Keywords: Domain Adaptation; Autonomous Driving; Virtual Data; Semantic Segmentation
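A minimal PyTorch sketch of the data-mixing setup the abstract evaluates, combining a large synthetic segmentation dataset with a smaller set of manually annotated real images in a single training loader; the dataset classes, sizes and sampling weights are illustrative assumptions rather than the paper's protocol.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, Dataset, WeightedRandomSampler

class DummySegDataset(Dataset):
    """Stand-in for a segmentation dataset returning (image, pixel-label map)."""

    def __init__(self, n_items, n_classes=13):
        self.n_items, self.n_classes = n_items, n_classes

    def __len__(self):
        return self.n_items

    def __getitem__(self, idx):
        image = torch.rand(3, 256, 512)
        labels = torch.randint(0, self.n_classes, (256, 512))
        return image, labels

synthetic_ds = DummySegDataset(9000)    # synthetic frames, pixel labels come for free
real_ds = DummySegDataset(500)          # manually annotated real images

combined = ConcatDataset([synthetic_ds, real_ds])
# Over-sample the scarce real images so each batch mixes both domains.
weights = torch.cat([
    torch.full((len(synthetic_ds),), 1.0 / len(synthetic_ds)),
    torch.full((len(real_ds),), 1.0 / len(real_ds)),
])
sampler = WeightedRandomSampler(weights, num_samples=len(combined))
loader = DataLoader(combined, batch_size=8, sampler=sampler)

images, labels = next(iter(loader))
print(images.shape, labels.shape)       # batches drawn from both domains
```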
|
|
|
Simon Jégou, Michal Drozdzal, David Vazquez, Adriana Romero and Yoshua Bengio. 2017. The One Hundred Layers Tiramisu: Fully Convolutional DenseNets for Semantic Segmentation. IEEE Conference on Computer Vision and Pattern Recognition Workshops.
Abstract: State-of-the-art approaches for semantic image segmentation are built on Convolutional Neural Networks (CNNs). The typical segmentation architecture is composed of (a) a downsampling path responsible for extracting coarse semantic features, followed by (b) an upsampling path trained to recover the input image resolution at the output of the model and, optionally, (c) a post-processing module (e.g. Conditional Random Fields) to refine the model predictions.
Recently, a new CNN architecture, Densely Connected Convolutional Networks (DenseNets), has shown excellent results on image classification tasks. The idea of DenseNets is based on the observation that if each layer is directly connected to every other layer in a feed-forward fashion then the network will be more accurate and easier to train.
In this paper, we extend DenseNets to deal with the problem of semantic segmentation. We achieve state-of-the-art results on urban scene benchmark datasets such as CamVid and Gatech, without any further post-processing module or pretraining. Moreover, due to the smart construction of the model, our approach has far fewer parameters than the currently published best entries for these datasets.
Keywords: Semantic Segmentation
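A minimal PyTorch sketch of the dense connectivity the abstract builds on: each layer within a block receives the concatenation of all previous feature maps. The growth rate and layer count are illustrative simplifications of the FC-DenseNet proposed in the paper.

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Sequential):
    def __init__(self, in_ch, growth_rate):
        super().__init__(
            nn.BatchNorm2d(in_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, growth_rate, kernel_size=3, padding=1),
        )

class DenseBlock(nn.Module):
    """Each layer sees the concatenation of all previous feature maps."""

    def __init__(self, in_ch, growth_rate=12, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            DenseLayer(in_ch + i * growth_rate, growth_rate) for i in range(n_layers)
        )
        self.out_channels = in_ch + n_layers * growth_rate

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

block = DenseBlock(in_ch=48)
out = block(torch.randn(2, 48, 64, 64))
print(out.shape)    # torch.Size([2, 96, 64, 64]) with growth_rate=12 and 4 layers
```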
|
|