Author David Vazquez; David Geronimo; Antonio Lopez
  Title The effect of the distance in pedestrian detection Type Report
  Year 2009 Publication CVC Technical Report Abbreviated Journal  
  Volume 149 Issue Pages
  Keywords Pedestrian Detection  
  Abstract Pedestrian accidents are one of the leading preventable causes of death. To reduce the number of accidents, pedestrian protection systems, a special type of advanced driver assistance system in which an on-board camera explores the road ahead for possible collisions with pedestrians in order to warn the driver or perform braking actions, have been introduced over the last decade. Because of the variability in appearance, pose and size, pedestrian detection is a very challenging task, and many techniques, models and features have been proposed to solve the problem. As the appearance of pedestrians varies significantly as a function of distance, a system based on multiple classifiers specialized on different depths is likely to improve the overall performance with respect to a typical system based on a general detector. Accordingly, the main aim of this work is to explore the effect of distance in pedestrian detection. We have evaluated three pedestrian detectors (HOG, HAAR and EOH) on two different databases (INRIA and Daimler09) for two different sizes (small and big). Through an extensive set of experiments we answer questions such as which datasets and evaluation methods are the most adequate, which method is best for each pedestrian size and why, and how the optimum parameters of each method vary with distance.
  Address  
  Corporate Author Thesis Master's thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference M.Sc.  
  Notes ADAS Approved no  
  Call Number ADAS @ adas @ VGL2009 Serial 1669  
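
The report above evaluates HOG, among other descriptors, at different pedestrian sizes. As a rough illustration of what a size-dependent HOG evaluation looks like, the sketch below computes HOG descriptors for a "far" (small) and a "near" (big) pedestrian window; the scikit-image pipeline, window sizes and HOG parameters are illustrative assumptions, not the report's actual setup.

    # Sketch: HOG descriptors for two pedestrian window sizes ("small" vs. "big"),
    # as a stand-in for distance-dependent detectors. Sizes are assumptions.
    import numpy as np
    from skimage.feature import hog
    from skimage.transform import resize

    def hog_descriptor(window, size):
        """Resize a pedestrian window and compute its HOG descriptor."""
        window = resize(window, size, anti_aliasing=True)
        return hog(window, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2), block_norm="L2-Hys")

    crop = np.random.rand(128, 64)                 # placeholder pedestrian crop
    far_feat = hog_descriptor(crop, (32, 16))      # distant (small) pedestrian
    near_feat = hog_descriptor(crop, (96, 48))     # close (big) pedestrian
    print(far_feat.shape, near_feat.shape)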
 

 
Author Maedeh Aghaei; Mariella Dimiccoli; Petia Radeva
  Title Multi-face tracking by extended bag-of-tracklets in egocentric photo-streams Type Journal Article
  Year 2016 Publication Computer Vision and Image Understanding Abbreviated Journal CVIU  
  Volume 149 Issue Pages 146-156
  Keywords  
  Abstract Wearable cameras offer a hands-free way to record egocentric images of daily experiences, where social events are of special interest. The first step towards detection of social events is to track the appearance of the multiple persons involved in them. In this paper, we propose a novel method to find correspondences of multiple faces in low temporal resolution egocentric videos acquired through a wearable camera. This kind of photo-stream imposes additional challenges on the multi-tracking problem with respect to conventional videos. Due to the free motion of the camera and to its low temporal resolution, abrupt changes in the field of view, in illumination conditions and in the target location are highly frequent. To overcome such difficulties, we propose a multi-face tracking method that generates a set of tracklets by finding correspondences along the whole sequence for each detected face, and takes advantage of tracklet redundancy to deal with unreliable ones. Similar tracklets are grouped into the so-called extended bag-of-tracklets (eBoT), each of which is intended to correspond to a specific person. Finally, a prototype tracklet is extracted for each eBoT, where occlusions are estimated by relying on a new measure of confidence. We validated our approach over an extensive dataset of egocentric photo-streams and compared it to state-of-the-art methods, demonstrating its effectiveness and robustness.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes MILAB; Approved no  
  Call Number Admin @ si @ ADR2016b Serial 2742  
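
The abstract above groups similar face tracklets into extended bags-of-tracklets and keeps a prototype per bag. A loose sketch of that grouping idea is given below; the similarity measure, threshold and longest-tracklet prototype rule are placeholders, not the paper's actual definitions.

    # Sketch: group face tracklets into "bags" by pairwise similarity and
    # keep the longest tracklet of each bag as its prototype.
    import numpy as np

    def similarity(t1, t2):
        """Placeholder similarity: distance between mean face embeddings."""
        return 1.0 / (1.0 + np.linalg.norm(t1.mean(axis=0) - t2.mean(axis=0)))

    def bag_of_tracklets(tracklets, thr=0.5):
        bags = []
        for t in tracklets:
            for bag in bags:
                if similarity(t, bag[0]) > thr:   # join an existing bag
                    bag.append(t)
                    break
            else:
                bags.append([t])                  # start a new bag
        return [max(bag, key=len) for bag in bags]  # prototype per bag

    tracklets = [np.random.rand(n, 128) for n in (12, 8, 20)]  # fake embeddings
    print(len(bag_of_tracklets(tracklets)))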
 

 
Author Gerard Canal; Sergio Escalera; Cecilio Angulo
  Title A Real-time Human-Robot Interaction system based on gestures for assistive scenarios Type Journal Article
  Year 2016 Publication Computer Vision and Image Understanding Abbreviated Journal CVIU  
  Volume 149 Issue Pages 65-77
  Keywords Gesture recognition; Human Robot Interaction; Dynamic Time Warping; Pointing location estimation  
  Abstract Natural and intuitive human interaction with robotic systems is a key point in developing robots that assist people in an easy and effective way. In this paper, a Human Robot Interaction (HRI) system able to recognize gestures usually employed in human non-verbal communication is introduced, and an in-depth study of its usability is performed. The system deals with dynamic gestures, such as waving or nodding, which are recognized using a Dynamic Time Warping approach based on gesture-specific features computed from depth maps. A static gesture, consisting of pointing at an object, is also recognized. The pointed location is then estimated in order to detect candidate objects the user may be referring to. When the pointed object is unclear to the robot, a disambiguation procedure by means of either a verbal or gestural dialogue is performed. This skill allows the robot to pick up an object on behalf of the user, who may have difficulty doing it themselves. The overall system, which is composed of a NAO robot, a Wifibot robot, a Kinect v2 sensor and two laptops, is first evaluated in a structured lab setup. Then, a broad set of user tests has been completed, which allows assessing performance in terms of recognition rates, ease of use and response times.
  Address  
  Corporate Author Thesis  
  Publisher Elsevier B.V. Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes HuPBA;MILAB; Approved no  
  Call Number Admin @ si @ CEA2016 Serial 2768  
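
The gesture recognizer described above relies on Dynamic Time Warping over per-frame depth features. The sketch below is a minimal DTW distance between two gesture sequences; the feature dimensionality and the Euclidean frame-to-frame cost are assumptions for illustration only.

    # Sketch: Dynamic Time Warping distance between two gesture sequences,
    # each a (frames x features) array of per-frame descriptors.
    import numpy as np

    def dtw_distance(a, b):
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = np.linalg.norm(a[i - 1] - b[j - 1])  # frame-to-frame cost
                D[i, j] = cost + min(D[i - 1, j],      # insertion
                                     D[i, j - 1],      # deletion
                                     D[i - 1, j - 1])  # match
        return D[n, m]

    query = np.random.rand(40, 10)      # e.g. a "wave" gesture, 40 frames
    template = np.random.rand(35, 10)   # stored gesture model
    print(dtw_distance(query, template))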
 

 
Author Arka Ujjal Dey; Suman Ghosh; Ernest Valveny; Gaurav Harit
  Title Beyond Visual Semantics: Exploring the Role of Scene Text in Image Understanding Type Journal Article
  Year 2021 Publication Pattern Recognition Letters Abbreviated Journal PRL  
  Volume 149 Issue Pages 164-171
  Keywords  
  Abstract Images with visual and scene text content are ubiquitous in everyday life. However, current image interpretation systems are mostly limited to using only the visual features, neglecting to leverage the scene text content. In this paper, we propose to jointly use scene text and visual channels for robust semantic interpretation of images. We not only extract and encode visual and scene text cues, but also model their interplay to generate a contextual joint embedding with richer semantics. The contextual embedding thus generated is applied to retrieval and classification tasks on multimedia images with scene text content, to demonstrate its effectiveness. In the retrieval framework, we augment our learned text-visual semantic representation with scene text cues to mitigate vocabulary misses that may have occurred during the semantic embedding. To deal with irrelevant or erroneous recognition of scene text, we also apply query-based attention to our text channel. We show how the multi-channel approach, involving visual semantics and scene text, improves upon the state of the art.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes DAG; 600.121 Approved no  
  Call Number Admin @ si @ DGV2021 Serial 3364  
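
The abstract above mentions query-based attention over the scene-text channel to down-weight irrelevant or erroneous OCR tokens. The sketch below shows the generic scaled dot-product attention step such a mechanism builds on; the embedding sizes and inputs are arbitrary placeholders, not the paper's model.

    # Sketch: scaled dot-product attention of a query embedding over
    # scene-text token embeddings, down-weighting irrelevant OCR tokens.
    import numpy as np

    def attend(query, text_tokens):
        d = query.shape[-1]
        scores = text_tokens @ query / np.sqrt(d)    # one score per text token
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                     # softmax
        return weights @ text_tokens                 # attended text feature

    query = np.random.rand(256)              # e.g. query / visual embedding
    text_tokens = np.random.rand(7, 256)     # embeddings of recognized scene words
    print(attend(query, text_tokens).shape)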
 

 
Author Diego Alejandro Cheda
  Title Monocular egomotion estimation for ADAS application Type Report
  Year 2009 Publication CVC Technical Report Abbreviated Journal  
  Volume 148 Issue Pages
  Keywords  
  Abstract  
  Address  
  Corporate Author Computer Vision Center Thesis Ph.D. thesis  
  Publisher Place of Publication Bellaterra, Barcelona Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ADAS Approved no  
  Call Number Admin @ si @ Che2009 Serial 2402  
 

 
Author Iban Berganzo-Besga; Hector A. Orengo; Felipe Lumbreras; Paloma Aliende; Monica N. Ramsey
  Title Automated detection and classification of multi-cell Phytoliths using Deep Learning-Based Algorithms Type Journal Article
  Year 2022 Publication Journal of Archaeological Science Abbreviated Journal JArchSci  
  Volume 148 Issue Pages 105654
  Keywords  
  Abstract This paper presents an algorithm for the automated detection and classification of multi-cell phytoliths, one of the major components of many archaeological and paleoenvironmental deposits. This identification, based on the phytolith wave pattern, is made using a pretrained VGG19 deep learning model. The approach has been tested on three key phytolith genera for the study of agricultural origins in Near East archaeology: Avena, Hordeum and Triticum. The classification has also been validated at the species level using Triticum boeoticum and dicoccoides images. Because of the diversity of microscopes, cameras and chemical treatments that can influence images of phytolith slides, three types of data augmentation techniques have been implemented: rotation of the images at 45-degree angles, random colour and brightness jittering, and random blur/sharpen. The implemented workflow has resulted in an overall accuracy of 93.68% for phytolith genera, improving on previous attempts. The algorithm has also demonstrated its potential to automate the classification of phytolith species, with an overall accuracy of 100%. The open code and platforms employed to develop the algorithm ensure the method's accessibility, reproducibility and reusability.
  Address December 2022  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes MSIAU; MACO; 600.167 Approved no  
  Call Number Admin @ si @ BOL2022 Serial 3753  
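
The abstract above lists three augmentation families: rotations in 45-degree steps, random colour/brightness jittering, and random blur/sharpen. A minimal Pillow-based sketch of those corruptions is shown below; the parameter ranges and the synthetic input image are illustrative assumptions, not the published pipeline.

    # Sketch: the three augmentation families described above, applied to a
    # (synthetic) phytolith slide image with Pillow. Ranges are illustrative.
    import random
    import numpy as np
    from PIL import Image, ImageEnhance, ImageFilter

    def augment(img):
        img = img.rotate(random.choice(range(0, 360, 45)), expand=True)      # 45-degree steps
        img = ImageEnhance.Brightness(img).enhance(random.uniform(0.8, 1.2))  # brightness jitter
        img = ImageEnhance.Color(img).enhance(random.uniform(0.8, 1.2))       # colour jitter
        if random.random() < 0.5:
            img = img.filter(ImageFilter.GaussianBlur(radius=1))              # random blur
        else:
            img = img.filter(ImageFilter.SHARPEN)                             # or sharpen
        return img

    slide = Image.fromarray((np.random.rand(224, 224, 3) * 255).astype("uint8"))
    print(augment(slide).size)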
 

 
Author Mohammad Momeny; Ali Asghar Neshat; Ahmad Jahanbakhshi; Majid Mahmoudi; Yiannis Ampatzidis; Petia Radeva
  Title Grading and fraud detection of saffron via learning-to-augment incorporated Inception-v4 CNN Type Journal Article
  Year 2023 Publication Food Control Abbreviated Journal FC  
  Volume 147 Issue Pages 109554
  Keywords  
  Abstract Saffron is a well-known product in the food industry. It is one of the spices that are sometimes adulterated with the sole motive of gaining more economic profit. Today, machine vision systems are widely used for controlling the quality of food and agricultural products as a new, non-destructive, and inexpensive approach. In this study, a machine vision system based on deep learning was used to detect fraud and assess saffron quality. A dataset of 1869 images was created and categorized into 6 classes: dried saffron stigma using a dryer; dried saffron stigma using the pressing method; pure stem of saffron; sunflower; saffron stem mixed with food coloring; and corn silk mixed with food coloring. A Learning-to-Augment incorporated Inception-v4 Convolutional Neural Network (LAII-v4 CNN) was developed for grading and fraud detection of saffron in images captured by smartphones. The best data augmentation policies were selected with the proposed LAII-v4 CNN using images corrupted by Gaussian, speckle, and impulse noise to address overfitting of the model. The proposed LAII-v4 CNN was compared with regular CNN-based methods and traditional classifiers. Ensemble of Bagged Decision Trees, Ensemble of Boosted Decision Trees, k-Nearest Neighbor, Random Under-sampling Boosted Trees, and Support Vector Machine classifiers were used to classify features extracted by Histograms of Oriented Gradients and Local Binary Patterns and selected by Principal Component Analysis. The results showed that the proposed LAII-v4 CNN, with an accuracy of 99.5%, achieved the best performance by employing batch normalization, Dropout, and leaky ReLU.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes MILAB Approved no  
  Call Number Admin @ si @ MNJ2023 Serial 3882  
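
The Learning-to-Augment policy search described above uses images corrupted by Gaussian, speckle and impulse noise. The sketch below reproduces just those three standard corruptions with scikit-image; the noise parameters and the placeholder image are assumptions, and the policy-selection logic itself is not shown.

    # Sketch: Gaussian, speckle and impulse (salt & pepper) corruptions,
    # as mentioned in the abstract, via scikit-image's random_noise helper.
    import numpy as np
    from skimage.util import random_noise

    image = np.random.rand(224, 224, 3)                 # placeholder image in [0, 1]

    gaussian = random_noise(image, mode="gaussian", var=0.01)
    speckle = random_noise(image, mode="speckle", var=0.01)
    impulse = random_noise(image, mode="s&p", amount=0.05)

    # A learning-to-augment scheme would score the classifier on each corrupted
    # variant and retain the augmentation policies that generalize best.
    print(gaussian.shape, speckle.shape, impulse.shape)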
 

 
Author Enric Sala
  Title Off-line person-dependent signature verification Type Report
  Year 2009 Publication CVC Technical Report Abbreviated Journal  
  Volume 146 Issue Pages
  Keywords  
  Abstract  
  Address  
  Corporate Author Computer Vision Center Thesis Master's thesis  
  Publisher Place of Publication Bellaterra, Barcelona Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number Admin @ si @ Sal2009 Serial 2400  
 

 
Author Farshad Nourbakhsh
  Title Colour logo recognition Type Report
  Year 2009 Publication CVC Technical Report Abbreviated Journal  
  Volume 145 Issue Pages
  Keywords  
  Abstract  
  Address  
  Corporate Author Computer Vision Center Thesis Master's thesis  
  Publisher Place of Publication Bellaterra, Barcelona Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes DAG Approved no  
  Call Number Admin @ si @ Nou2009 Serial 2399  
 

 
Author Katerine Diaz; Francesc J. Ferri; Aura Hernandez-Sabate
  Title An overview of incremental feature extraction methods based on linear subspaces Type Journal Article
  Year 2018 Publication Knowledge-Based Systems Abbreviated Journal KBS  
  Volume 145 Issue Pages 219-235
  Keywords  
  Abstract With the massive explosion of machine learning in our day-to-day life, incremental and adaptive learning has become a major topic, crucial to keep up-to-date and improve classification models and their corresponding feature extraction processes. This paper presents a categorized overview of incremental feature extraction based on linear subspace methods which aim at incorporating new information to the already acquired knowledge without accessing previous data. Specifically, this paper focuses on those linear dimensionality reduction methods with orthogonal matrix constraints based on global loss function, due to the extensive use of their batch approaches versus other linear alternatives. Thus, we cover the approaches derived from Principal Components Analysis, Linear Discriminative Analysis and Discriminative Common Vector methods. For each basic method, its incremental approaches are differentiated according to the subspace model and matrix decomposition involved in the updating process. Besides this categorization, several updating strategies are distinguished according to the amount of data used to update and to the fact of considering a static or dynamic number of classes. Moreover, the specific role of the size/dimension ratio in each method is considered. Finally, computational complexity, experimental setup and the accuracy rates according to published results are compiled and analyzed, and an empirical evaluation is done to compare the best approach of each kind.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0950-7051 ISBN Medium  
  Area Expedition Conference  
  Notes ADAS; 600.118 Approved no  
  Call Number Admin @ si @ DFH2018 Serial 3090  
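
As a concrete instance of the incremental subspace updating this survey categorizes, the sketch below uses scikit-learn's IncrementalPCA to update a PCA subspace from mini-batches without revisiting earlier data; it illustrates only one member of the reviewed family (incremental PCA) and is not code from the paper.

    # Sketch: updating a PCA subspace incrementally, one mini-batch at a time,
    # without keeping the previously seen samples.
    import numpy as np
    from sklearn.decomposition import IncrementalPCA

    ipca = IncrementalPCA(n_components=16)

    for _ in range(10):                        # data arrives in chunks
        batch = np.random.rand(200, 64)        # 200 new samples, 64 features
        ipca.partial_fit(batch)                # update the subspace in place

    projected = ipca.transform(np.random.rand(5, 64))
    print(projected.shape)                     # (5, 16)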
 

 
Author Jose Carlos Rubio
  Title Graph matching based on graphical models with application to vehicle tracking and classification at night Type Report
  Year 2009 Publication CVC Technical Report Abbreviated Journal  
  Volume 144 Issue Pages
  Keywords  
  Abstract  
  Address  
  Corporate Author Computer Vision Center Thesis Master's thesis  
  Publisher Place of Publication Bellaterra, Barcelona Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes CIC Approved no  
  Call Number Admin @ si @ Rub2009 Serial 2398  
 

 
Author Joakim Bruslund Haurum; Meysam Madadi; Sergio Escalera; Thomas B. Moeslund
  Title Multi-scale hybrid vision transformer and Sinkhorn tokenizer for sewer defect classification Type Journal Article
  Year 2022 Publication Automation in Construction Abbreviated Journal AC  
  Volume 144 Issue Pages 104614
  Keywords Sewer Defect Classification; Vision Transformers; Sinkhorn-Knopp; Convolutional Neural Networks; Closed-Circuit Television; Sewer Inspection  
  Abstract A crucial part of image classification consists of capturing non-local spatial semantics of image content. This paper describes the multi-scale hybrid vision transformer (MSHViT), an extension of the classical convolutional neural network (CNN) backbone, for multi-label sewer defect classification. To better model spatial semantics in the images, features are aggregated at different scales non-locally through the use of a lightweight vision transformer, and a smaller set of tokens was produced through a novel Sinkhorn clustering-based tokenizer using distinct cluster centers. The proposed MSHViT and Sinkhorn tokenizer were evaluated on the Sewer-ML multi-label sewer defect classification dataset, showing consistent performance improvements of up to 2.53 percentage points.  
  Address Dec 2022  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes HuPBA Approved no  
  Call Number Admin @ si @ BME2022c Serial 3780  
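
The Sinkhorn tokenizer named above balances soft assignments of spatial features to a small set of cluster centers. The sketch below shows the underlying Sinkhorn-Knopp-style alternation of row and column normalizations on a cost-derived kernel; the cost function, temperature and token count are illustrative assumptions, not the MSHViT implementation.

    # Sketch: Sinkhorn-style balancing of feature-to-cluster-center assignments,
    # followed by aggregation of the features into a small set of tokens.
    import numpy as np

    def sinkhorn_assign(cost, eps=0.05, n_iters=50):
        """Soft assignments balanced across cluster centers."""
        K = np.exp(-(cost - cost.min()) / eps)
        for _ in range(n_iters):
            K /= K.sum(axis=0, keepdims=True)   # balance mass over centers
            K /= K.sum(axis=1, keepdims=True)   # each feature's weights sum to 1
        return K

    features = np.random.rand(196, 64)           # e.g. CNN feature-map positions
    centers = np.random.rand(8, 64)              # distinct cluster centers
    cost = np.linalg.norm(features[:, None] - centers[None, :], axis=-1)
    assignment = sinkhorn_assign(cost)           # (196, 8) soft assignments
    tokens = assignment.T @ features             # 8 aggregated tokens
    print(tokens.shape)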
 

 
Author Ruben Tito; Dimosthenis Karatzas; Ernest Valveny
  Title Hierarchical multimodal transformers for Multi-Page DocVQA Type Journal Article
  Year 2023 Publication Pattern Recognition Abbreviated Journal PR  
  Volume 144 Issue Pages 109834
  Keywords  
  Abstract Document Visual Question Answering (DocVQA) refers to the task of answering questions from document images. Existing work on DocVQA only considers single-page documents. However, in real scenarios documents are mostly composed of multiple pages that should be processed altogether. In this work we extend DocVQA to the multi-page scenario. For that, we first create a new dataset, MP-DocVQA, where questions are posed over multi-page documents instead of single pages. Second, we propose a new hierarchical method, Hi-VT5, based on the T5 architecture, that overcomes the limitations of current methods to process long multi-page documents. The proposed method is based on a hierarchical transformer architecture where the encoder summarizes the most relevant information of every page and then, the decoder takes this summarized information to generate the final answer. Through extensive experimentation, we demonstrate that our method is able, in a single stage, to answer the questions and provide the page that contains the relevant information to find the answer, which can be used as a kind of explainability measure.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0031-3203 ISBN Medium
  Area Expedition Conference  
  Notes DAG; 600.155; 600.121 Approved no  
  Call Number Admin @ si @ TKV2023 Serial 3825  
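
The hierarchical scheme described above, where an encoder condenses each page into a short summary and a decoder answers over the concatenated summaries, can be sketched with plain PyTorch modules as below; the dimensions, number of summary tokens and module choices are assumptions for illustration and do not reproduce Hi-VT5.

    # Sketch: encode each page independently, keep a few summary tokens per page,
    # and let the decoder attend only to the concatenated page summaries.
    import torch
    import torch.nn as nn

    class HierarchicalQA(nn.Module):
        def __init__(self, d_model=256, n_heads=4, summary_tokens=2):
            super().__init__()
            enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            dec_layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
            self.decoder = nn.TransformerDecoder(dec_layer, num_layers=2)
            self.summary_tokens = summary_tokens

        def forward(self, pages, answer_emb):
            # pages: list of (1, tokens_per_page, d_model) tensors, one per page
            summaries = [self.encoder(p)[:, :self.summary_tokens] for p in pages]
            memory = torch.cat(summaries, dim=1)     # (1, pages * summary_tokens, d)
            return self.decoder(answer_emb, memory)  # decode answer over summaries

    model = HierarchicalQA()
    pages = [torch.randn(1, 50, 256) for _ in range(3)]  # 3 pages of token embeddings
    answer_emb = torch.randn(1, 8, 256)                   # target answer embeddings
    print(model(pages, answer_emb).shape)                 # torch.Size([1, 8, 256])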
 

 
Author Ruben Tito; Dimosthenis Karatzas; Ernest Valveny
  Title Hierarchical multimodal transformers for Multipage DocVQA Type Journal Article
  Year 2023 Publication Pattern Recognition Abbreviated Journal PR  
  Volume 144 Issue 109834 Pages
  Keywords  
  Abstract Existing work on DocVQA only considers single-page documents. However, in real applications documents are mostly composed of multiple pages that should be processed altogether. In this work, we propose a new multimodal hierarchical method, Hi-VT5, that overcomes the limitations of current methods to process long multipage documents, in contrast to previous hierarchical methods that focus on different semantic granularity (He et al., 2021) or different subtasks (Zhou et al., 2022) in image classification. Our method is a hierarchical transformer architecture where the encoder learns to summarize the most relevant information of every page and then the decoder uses this summarized representation to generate the final answer, following a bottom-up approach. Moreover, due to the lack of multipage DocVQA datasets, we also introduce MP-DocVQA, an extension of SP-DocVQA where questions are posed over multipage documents instead of single pages. Through extensive experimentation, we demonstrate that Hi-VT5 is able, in a single stage, to answer the questions and provide the page that contains the answer, which can be used as a kind of explainability measure.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes DAG Approved no  
  Call Number Admin @ si @ TKV2023 Serial 3836  
 

 
Author Jaume Gibert
  Title Learning structural representations and graph matching paradigms in the context of object recognition Type Report
  Year 2009 Publication CVC Technical Report Abbreviated Journal  
  Volume 143 Issue Pages
  Keywords  
  Abstract  
  Address  
  Corporate Author Computer Vision Center Thesis Master's thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes DAG Approved no  
  Call Number Admin @ si @ Gib2009 Serial 2397  