Author: Sergi Garcia Bordils; Dimosthenis Karatzas; Marçal Rusiñol
Title: Accelerating Transformer-Based Scene Text Detection and Recognition via Token Pruning
Type: Conference Article
Year: 2023
Publication: 17th International Conference on Document Analysis and Recognition
Volume: 14192
Pages: 106-121
Keywords: Scene Text Detection; Scene Text Recognition; Transformer Acceleration
Abstract: Scene text detection and recognition is a crucial task in computer vision with numerous real-world applications. Transformer-based approaches are behind all current state-of-the-art models and have achieved excellent performance. However, the computational requirements of the transformer architecture make training these methods slow and resource-heavy. In this paper, we introduce a new token pruning strategy that significantly decreases training and inference times without sacrificing performance, striking a balance between accuracy and speed. We apply this pruning technique to our own end-to-end transformer-based scene text understanding architecture. Our method uses a separate detection branch to guide the pruning of uninformative image features, which significantly reduces the number of tokens at the input of the transformer. Experimental results show that our network obtains competitive results on multiple public benchmarks while running at significantly higher speeds.
Address: San Jose, CA, USA; August 2023
Abbreviated Series Title: LNCS
Conference: ICDAR
Notes: DAG
Approved: no
Call Number: Admin @ si @ GKR2023a
Serial: 3907
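
The abstract gives the idea but no implementation details. Below is a minimal, hypothetical PyTorch sketch of detection-guided token pruning: a small scoring head stands in for the paper's detection branch, and only the top-scoring fraction of tokens reaches the transformer encoder. The class name, the linear scoring head and the keep_ratio parameter are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class DetectionGuidedPruning(nn.Module):
    """Sketch: a light head scores each image token, and only the
    highest-scoring fraction is passed on to the transformer."""

    def __init__(self, dim: int, keep_ratio: float = 0.5):
        super().__init__()
        self.score_head = nn.Linear(dim, 1)   # stands in for the detection branch
        self.keep_ratio = keep_ratio

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_tokens, dim)
        scores = self.score_head(tokens).squeeze(-1)            # (B, N)
        k = max(1, int(tokens.shape[1] * self.keep_ratio))
        idx = scores.topk(k, dim=1).indices                     # kept-token indices
        idx = idx.unsqueeze(-1).expand(-1, -1, tokens.shape[2])
        return tokens.gather(1, idx)                            # (B, k, dim)

# Usage: prune before the (much more expensive) transformer encoder.
pruner = DetectionGuidedPruning(dim=256, keep_ratio=0.25)
feats = torch.randn(2, 1024, 256)   # e.g. a flattened CNN feature map
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(256, 8, batch_first=True), num_layers=2)
out = encoder(pruner(feats))        # the encoder now sees 256 tokens, not 1024
```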
 
Author: Adarsh Tiwari; Sanket Biswas; Josep Llados
Title: Can Pre-trained Language Models Help in Understanding Handwritten Symbols?
Type: Conference Article
Year: 2023
Publication: 17th International Conference on Document Analysis and Recognition
Volume: 14193
Pages: 199-211
Abstract: The emergence of transformer models like BERT, GPT-2, GPT-3, RoBERTa and T5 for natural language understanding tasks has opened the floodgates towards solving a wide array of machine learning tasks in other modalities, such as images, audio, music and sketches. These language models are domain-agnostic and, as a result, can be applied to 1-D sequences of any kind. However, the key challenge lies in bridging the modality gap so that they can generate strong features beneficial for out-of-domain tasks. This work focuses on leveraging the power of such pre-trained language models and discusses the challenges of predicting handwritten symbols and alphabets.
Address: San Jose, CA, USA; August 2023
Conference: ICDAR
Notes: DAG
Approved: no
Call Number: Admin @ si @ TBL2023
Serial: 3908
 
Author: Aniol Lidon; Marc Bolaños; Mariella Dimiccoli; Petia Radeva; Maite Garolera; Xavier Giro
Title: Semantic Summarization of Egocentric Photo-Stream Events
Type: Conference Article
Year: 2017
Publication: 2nd Workshop on Lifelogging Tools and Applications
Address: San Francisco, USA; October 2017
ISBN: 978-1-4503-5503-2
Conference: ACMW (LTA)
Notes: MILAB; no proj
Approved: no
Call Number: Admin @ si @ LBD2017
Serial: 3024
 
Author: Jose Manuel Alvarez; Theo Gevers; Antonio Lopez
Title: 3D Scene Priors for Road Detection
Type: Conference Article
Year: 2010
Publication: 23rd IEEE Conference on Computer Vision and Pattern Recognition
Pages: 57-64
Keywords: road detection
Abstract: Vision-based road detection is important in different areas of computer vision, such as autonomous driving, car collision warning and pedestrian crossing detection. However, current vision-based road detection methods are usually based on low-level features and assume structured roads, road homogeneity, and uniform lighting conditions. Therefore, in this paper, contextual 3D information is used in addition to low-level cues. Low-level photometric invariant cues are derived from the appearance of roads. The contextual cues used include horizon lines, vanishing points, 3D scene layout and 3D road stages. Moreover, temporal road cues are included. All these cues are sensitive to different imaging conditions and hence are considered weak cues. Therefore, they are combined to improve the overall performance of the algorithm. To this end, the low-level, contextual and temporal cues are combined in a Bayesian framework to classify road sequences. Large-scale experiments on road sequences show that the road detection method is robust to varying imaging conditions, road types, and scenarios (tunnels, urban and highway). Further, the combined cues outperform any individual cue. Finally, the proposed method provides the highest road detection accuracy when compared to state-of-the-art methods.
Address: San Francisco, CA, USA; June 2010
ISSN: 1063-6919
ISBN: 978-1-4244-6984-0
Conference: CVPR
Notes: ADAS; ISE
Approved: no
Call Number: ADAS @ adas @ AGL2010a
Serial: 1302
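
As a rough illustration of how several weak road cues can be combined in a Bayesian framework, here is a generic naive-Bayes log-odds fusion sketch; the function name, the cue-independence assumption and the per-cue likelihood maps are assumptions for illustration, not the paper's exact model.

```python
import numpy as np

def fuse_weak_cues(likelihoods, prior=0.5, eps=1e-6):
    """Naive-Bayes fusion of per-pixel road likelihoods from several weak
    cues (e.g. photometric invariance, horizon line, vanishing point).
    `likelihoods` is a list of HxW arrays with values in [0, 1]."""
    log_odds = np.log(prior / (1 - prior))
    for p in likelihoods:
        p = np.clip(p, eps, 1 - eps)
        log_odds = log_odds + np.log(p / (1 - p))   # cues assumed independent
    return 1.0 / (1.0 + np.exp(-log_odds))          # posterior P(road | cues)

# Toy usage with three synthetic cue maps:
cues = [np.random.rand(240, 320) for _ in range(3)]
road_posterior = fuse_weak_cues(cues)
road_mask = road_posterior > 0.5
```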
 
Author: Mohammad Rouhani; Angel Sappa
Title: Relaxing the 3L Algorithm for an Accurate Implicit Polynomial Fitting
Type: Conference Article
Year: 2010
Publication: 23rd IEEE Conference on Computer Vision and Pattern Recognition
Pages: 3066-3072
Abstract: This paper presents a novel method to increase the accuracy of linear fitting of implicit polynomials. The proposed method is based on the philosophy of the 3L algorithm. The novelty lies in the relaxation of the additional constraints already imposed by the 3L algorithm. Hence, the accuracy of the final solution is increased due to the proper adjustment of the expected values in the aforementioned additional constraints. Although iterative, the proposed approach solves the fitting problem within a linear framework, which is independent of threshold tuning. Experimental results in both 2D and 3D, showing improvements in fitting accuracy, are presented. Comparisons are provided with both state-of-the-art algorithms and a geometry-based (non-linear) fitting method, which is used as ground truth.
Address: San Francisco, CA, USA; June 2010
ISSN: 1063-6919
ISBN: 978-1-4244-6984-0
Conference: CVPR
Notes: ADAS
Approved: no
Call Number: ADAS @ adas @ RoS2010a
Serial: 1303
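
For readers unfamiliar with the 3L family, this NumPy sketch shows the classic linear setup the paper starts from: the data points plus two offset layers along the normals define a linear least-squares problem with fixed targets 0 and ±c. Those fixed targets are exactly the constraints the paper proposes to relax; the offset construction and parameter values below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def monomials(x, y, degree):
    """Monomial basis [1, x, y, x^2, xy, y^2, ...] up to `degree`."""
    cols = [x**i * y**(j - i) for j in range(degree + 1) for i in range(j, -1, -1)]
    return np.stack(cols, axis=1)

def fit_ip_3l(points, degree=4, c=0.05):
    """3L-style linear fit of an implicit polynomial f(x,y)=0: the data set
    (target 0) and two offset layers along the normals (targets -c, +c)."""
    # crude normals from neighbouring points on the (ordered) curve
    t = np.gradient(points, axis=0)
    n = np.stack([-t[:, 1], t[:, 0]], axis=1)
    n /= np.linalg.norm(n, axis=1, keepdims=True) + 1e-12
    inner, outer = points - c * n, points + c * n
    X = np.vstack([points, inner, outer])
    M = monomials(X[:, 0], X[:, 1], degree)
    b = np.concatenate([np.zeros(len(points)),
                        -c * np.ones(len(points)),
                         c * np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(M, b, rcond=None)
    return coeffs   # f(x, y) = monomials(x, y, degree) @ coeffs

# Toy usage: fit a degree-4 implicit polynomial to a noisy circle.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
pts = np.stack([np.cos(theta), np.sin(theta)], axis=1)
pts += 0.01 * np.random.randn(*pts.shape)
coeffs = fit_ip_3l(pts)
```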
 
Author: Javier Marin; David Vazquez; David Geronimo; Antonio Lopez
Title: Learning Appearance in Virtual Scenarios for Pedestrian Detection
Type: Conference Article
Year: 2010
Publication: 23rd IEEE Conference on Computer Vision and Pattern Recognition
Pages: 137-144
Keywords: Pedestrian Detection; Domain Adaptation
Abstract: Detecting pedestrians in images is a key functionality to avoid vehicle-to-pedestrian collisions. The most promising detectors rely on appearance-based pedestrian classifiers trained with labelled samples. This paper addresses the following question: can a pedestrian appearance model learnt in virtual scenarios work successfully for pedestrian detection in real images? (Fig. 1). Our experiments suggest a positive answer, which is a new and relevant conclusion for research in pedestrian detection. More specifically, we record training sequences in virtual scenarios and then learn appearance-based pedestrian classifiers using HOG and linear SVM. We test such classifiers on a publicly available dataset provided by Daimler AG for pedestrian detection benchmarking. This dataset contains real-world images acquired from a moving car. The obtained result is compared with that of a classifier learnt using samples from real images. The comparison reveals that, although the virtual samples were not specially selected, both virtual- and real-based training give rise to classifiers of similar performance.
Address: San Francisco, CA, USA; June 2010
Language: English
Summary Language: English
Original Title: Learning Appearance in Virtual Scenarios for Pedestrian Detection
ISSN: 1063-6919
ISBN: 978-1-4244-6984-0
Conference: CVPR
Notes: ADAS
Approved: no
Call Number: ADAS @ adas @ MVG2010
Serial: 1304
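
The training recipe named in the abstract (HOG descriptors plus a linear SVM) can be sketched with standard libraries. The arrays below are random placeholders for the virtual-world training crops and real-world test crops, and the HOG settings and C value are assumptions, not the paper's configuration.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

# Placeholders standing in for 128x64 grayscale pedestrian/background crops.
train_imgs = np.random.rand(100, 128, 64)     # would be virtual-world crops
train_labels = np.random.randint(0, 2, 100)   # 1 = pedestrian, 0 = background
test_imgs = np.random.rand(20, 128, 64)       # would be real-world crops

def hog_features(imgs):
    """One HOG descriptor per image (Dalal-Triggs-style settings)."""
    return np.array([hog(im, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for im in imgs])

clf = LinearSVC(C=0.01)                       # linear SVM, as in the abstract
clf.fit(hog_features(train_imgs), train_labels)
pred = clf.predict(hog_features(test_imgs))   # evaluate on the "real" images
```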
 
Author: Fadi Dornaika; Bogdan Raducanu
Title: Single Snapshot 3D Head Pose Initialization for Tracking in Human Robot Interaction Scenario
Type: Conference Article
Year: 2010
Publication: 1st International Workshop on Computer Vision for Human-Robot Interaction
Pages: 32-39
Keywords: 1st International Workshop on Computer Vision for Human-Robot Interaction, in conjunction with IEEE CVPR 2010
Abstract: This paper presents an automatic 3D head pose initialization scheme for a real-time face tracker, with application to human-robot interaction. It has two main contributions. First, we propose an automatic 3D head pose and person-specific face shape estimation based on a 3D deformable model, which serves to initialize our real-time 3D face tracker. What makes this contribution very attractive is that the initialization step can cope with faces under arbitrary pose, so it is not limited to near-frontal views. Second, this framework is used to develop an application in which the orientation of an AIBO's camera is controlled through imitation of the user's head pose. In our scenario, the application is used to build panoramic images from overlapping snapshots. Experiments on real videos confirm the robustness and usefulness of the proposed methods.
Address: San Francisco, CA, USA; June 2010
ISSN: 2160-7508
ISBN: 978-1-4244-7029-7
Conference: CVPRW
Notes: OR; MV
Approved: no
Call Number: BCNPCL @ bcnpcl @ DoR2010a
Serial: 1309
 
Author: David Aldavert; Arnau Ramisa; Ramon Lopez de Mantaras; Ricardo Toledo
Title: Fast and Robust Object Segmentation with the Integral Linear Classifier
Type: Conference Article
Year: 2010
Publication: 23rd IEEE Conference on Computer Vision and Pattern Recognition
Pages: 1046-1053
Abstract: We propose an efficient method, built on the popular Bag of Features approach, that obtains robust multiclass pixel-level object segmentation of an image in less than 500 ms, with results comparable to or better than most state-of-the-art methods. We introduce the Integral Linear Classifier (ILC), which can readily obtain the classification score for any image sub-window with only 6 additions and 1 product, by fusing the accumulation and classification steps in a single operation. In order to design a method as efficient as possible, our building blocks are carefully selected from the quickest in the state of the art. More precisely, we evaluate the performance of three popular local descriptors that can be computed very efficiently using integral images, and two fast quantization methods: Hierarchical K-Means and the Extremely Randomized Forest. Finally, we explore the utility of adding spatial bins to the Bag of Features histograms, and that of cascade classifiers, to improve the obtained segmentation. Our method is compared to the state of the art on the difficult Graz-02 and PASCAL 2007 Segmentation Challenge datasets.
Address: San Francisco, CA, USA; June 2010
ISSN: 1063-6919
ISBN: 978-1-4244-6984-0
Conference: CVPR
Notes: ADAS
Approved: no
Call Number: Admin @ si @ ARL2010a
Serial: 1311
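
The core trick behind the ILC can be sketched as follows: for a linear classifier on a Bag-of-Features histogram, a window's score is the sum of the weights of the visual words inside it, so an integral image of per-pixel weights makes any sub-window scorable in constant time. This NumPy sketch shows only that basic idea; the paper's exact "6 additions and 1 product" count also covers terms (such as normalization) not modeled here.

```python
import numpy as np

def integral_linear_classifier(word_map, weights, bias=0.0):
    """Build a constant-time window scorer from a map of quantized visual
    words and the linear classifier's per-word weights."""
    pixel_scores = weights[word_map]          # w[word] at every pixel
    ii = pixel_scores.cumsum(0).cumsum(1)     # integral image of scores
    ii = np.pad(ii, ((1, 0), (1, 0)))         # zero row/col for easy lookups

    def score(y0, x0, y1, x1):
        # classification score of window [y0:y1, x0:x1] from 4 lookups
        return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0] + bias

    return score

# Toy usage: 100 visual words, random word map and classifier weights.
word_map = np.random.randint(0, 100, size=(240, 320))
weights = np.random.randn(100)
score = integral_linear_classifier(word_map, weights, bias=-0.3)
s = score(50, 60, 150, 180)   # one sub-window scored in O(1)
```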
 
Author: Michal Drozdzal; Laura Igual; Petia Radeva; Jordi Vitria; Carolina Malagelada; Fernando Azpiroz
Title: Aligning Endoluminal Scene Sequences in Wireless Capsule Endoscopy
Type: Conference Article
Year: 2010
Publication: IEEE Computer Society Workshop on Mathematical Methods in Biomedical Image Analysis
Pages: 117-124
Abstract: Intestinal motility analysis is an important examination in the detection of various intestinal malfunctions. One of the big challenges of automatic motility analysis is how to compare sequences of images and extract dynamic patterns, taking into account the high deformability of the intestine wall as well as the capsule motion. From a clinical point of view, the ability to align endoluminal scene sequences will help to find regions of similar intestinal activity and in this way provide valuable information on intestinal motility problems. This work, for the first time, addresses the problem of aligning endoluminal sequences taking into account the motion and structure of the intestine. To describe motility in a sequence, we propose different descriptors based on the SIFT Flow algorithm, namely: (1) Histograms of SIFT Flow Directions to describe the flow course, (2) SIFT Descriptors to represent the intestine structure in the image, and (3) SIFT Flow Magnitude to quantify intestine deformation. We show that merging all three descriptors provides robust information for sequence description in terms of motility. Moreover, we develop a novel methodology to rank the intestinal sequences based on expert feedback about the relevance of the results. The experimental results show that the selected descriptors are useful for alignment and similarity description, and that the proposed method enables the analysis of WCE sequences.
Address: San Francisco, CA, USA; June 2010
ISSN: 2160-7508
ISBN: 978-1-4244-7029-7
Conference: MMBIA
Notes: OR; MILAB; MV
Approved: no
Call Number: BCNPCL @ bcnpcl @ DIR2010
Serial: 1316
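
Of the three descriptors listed in the abstract, the Histograms of SIFT Flow Directions is easy to sketch generically: a magnitude-weighted histogram over flow angles. The SIFT Flow field itself is assumed to come from an existing implementation; the bin count and the magnitude weighting are assumptions, not necessarily the paper's choices.

```python
import numpy as np

def flow_direction_histogram(flow, bins=8):
    """Magnitude-weighted histogram of flow directions for an HxWx2
    displacement field (e.g. the output of a SIFT Flow implementation)."""
    dx, dy = flow[..., 0].ravel(), flow[..., 1].ravel()
    angles = np.arctan2(dy, dx)               # direction in (-pi, pi]
    magnitude = np.hypot(dx, dy)
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi),
                           weights=magnitude)
    total = hist.sum()
    return hist / total if total > 0 else hist   # normalized descriptor

# Toy usage on a random flow field:
flow = np.random.randn(64, 64, 2)
descriptor = flow_direction_histogram(flow)
```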
 
Author: Antonio Hernandez; Miguel Reyes; Sergio Escalera; Petia Radeva
Title: Spatio-Temporal GrabCut human segmentation for face and pose recovery
Type: Conference Article
Year: 2010
Publication: IEEE International Workshop on Analysis and Modeling of Faces and Gestures
Pages: 33-40
Abstract: In this paper, we present a fully automatic Spatio-Temporal GrabCut human segmentation methodology. GrabCut initialization is performed by HOG-based subject detection, face detection, and a skin color model for seed initialization. Spatial information is included by means of Mean Shift clustering, whereas temporal coherence is considered through the history of the Gaussian Mixture Models. Moreover, human segmentation is combined with Shape and Active Appearance Models to perform full face and pose recovery. Results on public datasets, as well as on our own human action dataset, show robust segmentation and recovery of both face and pose using the presented methodology.
Address: San Francisco, CA, USA; June 2010
ISSN: 2160-7508
ISBN: 978-1-4244-7029-7
Conference: AMFG
Notes: MILAB; HUPBA
Approved: no
Call Number: BCNPCL @ bcnpcl @ HRE2010
Serial: 1362
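
The GrabCut core of this pipeline maps directly onto OpenCV's cv2.grabCut. In the sketch below a fixed rectangle stands in for the paper's HOG/face/skin-color seed initialization and a random image stands in for a video frame; only the OpenCV call itself is standard, the rest is illustrative.

```python
import numpy as np
import cv2

# Random image standing in for one video frame of a person.
img = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)
mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)   # GMM state required by OpenCV
fgd_model = np.zeros((1, 65), np.float64)
rect = (50, 20, 200, 400)                   # (x, y, w, h) seed region

# 5 GrabCut iterations, initialized from the rectangle.
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Pixels marked (probably) foreground form the person segmentation.
person = ((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)).astype(np.uint8)
segmented = img * person[:, :, None]
```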
 
Author: Petia Radeva; M. Scoccianti
Title: 3D Reconstruction of Abdominal Aortic Aneurysm
Type: Miscellaneous
Year: 2000
Publication: Elsevier Science B.V., Ed. H.U. Lemke, M.W. Vannier, K. Inamura, A.G. Farman and K. Doi, CARS 2000, p. 1014, ISBN: 0-444-50536-9
Address: San Francisco, USA
Notes: MILAB
Approved: no
Call Number: BCNPCL @ bcnpcl @ RaS2000
Serial: 439
 
Author: Sandra Jimenez; Xavier Otazu; Valero Laparra; Jesus Malo
Title: Chromatic induction and contrast masking: similar models, different goals?
Type: Conference Article
Year: 2013
Publication: Human Vision and Electronic Imaging XVIII
Volume: 8651
Abstract: Normalization of signals coming from linear sensors is a ubiquitous mechanism of neural adaptation. Local interaction between sensors tuned to a particular feature at a certain spatial position and neighboring sensors explains a wide range of psychophysical facts, including (1) masking of spatial patterns, (2) non-linearities of motion sensors, (3) adaptation of color perception, (4) brightness and chromatic induction, and (5) image quality assessment. Although the above models have formal and qualitative similarities, this does not necessarily mean that the mechanisms involved are pursuing the same statistical goal. For instance, in the case of chromatic mechanisms (disregarding spatial information), different parameters in the normalization give rise to optimal discrimination or adaptation, and different non-linearities may give rise to error minimization or component independence. In the case of spatial sensors (disregarding color information), a number of studies have pointed out the benefits of masking in statistical independence terms. However, such statistical analysis has not been performed for spatio-chromatic induction models, where chromatic perception depends on spatial configuration. In this work we investigate whether successful spatio-chromatic induction models increase component independence, as previously reported for masking models. Mutual information analysis suggests that seeking an efficient chromatic representation may explain the prevalence of induction effects in spatially simple images.
Address: San Francisco, CA, USA; February 2013
Conference: HVEI
Notes: CIC
Approved: no
Call Number: Admin @ si @ JOL2013
Serial: 2240
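
The "normalization of signals coming from linear sensors" that opens the abstract is usually written as divisive normalization. A generic textbook form, not necessarily the exact parameterization used by the models the abstract compares, is:

```latex
% Divisive normalization of linear sensor responses s_i. The exponent \gamma,
% the constant \beta and the interaction kernel h_{ij} are the parameters that,
% per the abstract, can be tuned towards different statistical goals.
r_i = \frac{\operatorname{sign}(s_i)\,|s_i|^{\gamma}}
           {\beta + \sum_{j} h_{ij}\,|s_j|^{\gamma}}
```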
 
Author: Xinhang Song; Luis Herranz; Shuqiang Jiang
Title: Depth CNNs for RGB-D Scene Recognition: Learning from Scratch Better than Transferring from RGB-CNNs
Type: Conference Article
Year: 2017
Publication: 31st AAAI Conference on Artificial Intelligence
Keywords: RGB-D scene recognition; weakly supervised; fine tune; CNN
Abstract: Scene recognition with RGB images has been extensively studied and has reached remarkable recognition levels, thanks to convolutional neural networks (CNNs) and large scene datasets. In contrast, current RGB-D scene data is much more limited, so methods often leverage large RGB datasets by transferring pretrained RGB CNN models and fine-tuning with the target RGB-D dataset. However, we show that this approach has the limitation of hardly reaching the bottom layers, which are key to learning modality-specific features. In contrast, we focus on the bottom layers and propose an alternative strategy to learn depth features, combining local weakly supervised training from patches with subsequent global fine-tuning with images. This strategy is capable of learning very discriminative, depth-specific features with limited depth images, without resorting to Places-CNN. In addition, we propose a modified CNN architecture to further match the complexity of the model to the amount of data available. For RGB-D scene recognition, depth and RGB features are combined by projecting them into a common space and further learning a multilayer classifier, which is jointly optimized in an end-to-end network. Our framework achieves state-of-the-art accuracy on NYU2 and SUN RGB-D with both depth-only and combined RGB-D data.
Address: San Francisco, CA; February 2017
Conference: AAAI
Notes: LAMP; 600.120
Approved: no
Call Number: Admin @ si @ SHJ2017
Serial: 2967
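
A speculative PyTorch sketch of the two-stage strategy described in the abstract: the bottom layers are first trained on depth patches that inherit the weak (image-level) label, then the whole network is fine-tuned on full depth images. The architecture, the 19-class label space and the hyperparameters are placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn

# A tiny "bottom layers" stack; AdaptiveAvgPool makes it size-agnostic,
# so the same trunk handles small patches and full images.
bottom = nn.Sequential(
    nn.Conv2d(1, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten())
patch_head = nn.Linear(64, 19)   # weak patch labels (label of the source image)
image_head = nn.Linear(64, 19)   # final scene classifier
criterion = nn.CrossEntropyLoss()

# Stage 1: weakly supervised patch training (one illustrative step).
opt = torch.optim.SGD(list(bottom.parameters()) + list(patch_head.parameters()), lr=0.01)
patches, weak_labels = torch.randn(32, 1, 32, 32), torch.randint(0, 19, (32,))
loss = criterion(patch_head(bottom(patches)), weak_labels)
opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: global fine-tuning on full depth images (one illustrative step).
opt = torch.optim.SGD(list(bottom.parameters()) + list(image_head.parameters()), lr=0.001)
images, labels = torch.randn(8, 1, 224, 224), torch.randint(0, 19, (8,))
loss = criterion(image_head(bottom(images)), labels)
opt.zero_grad(); loss.backward(); opt.step()
```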
 
Author: Mario Rojas; David Masip; A. Todorov; Jordi Vitria
Title: Automatic Point-based Facial Trait Judgments Evaluation
Type: Conference Article
Year: 2010
Publication: 23rd IEEE Conference on Computer Vision and Pattern Recognition
Pages: 2715-2720
Abstract: Humans constantly evaluate the personalities of other people using their faces. Facial trait judgments have been studied in the psychological field and have been determined to influence important social outcomes of our lives, such as election outcomes and social relationships. Recent work on textual descriptions of faces has shown that trait judgments are highly correlated. Further, behavioral studies suggest that two orthogonal dimensions, valence and dominance, form the basis of human judgments from faces. In this paper, we use a corpus of behavioral data of judgments on different trait dimensions to automatically learn a trait predictor from facial pixel images. We study whether trait evaluations performed by humans can be learned using machine learning classifiers and later used in automatic evaluations of new facial images. The experiments performed using local point-based descriptors show promising results in the evaluation of the main traits.
Address: San Francisco, CA, USA
ISSN: 1063-6919
ISBN: 978-1-4244-6984-0
Conference: CVPR
Notes: OR; MV
Approved: no
Call Number: BCNPCL @ bcnpcl @ RMT2010
Serial: 1282
 
Author: Josep M. Gonfaus; Xavier Boix; Joost Van de Weijer; Andrew Bagdanov; Joan Serrat; Jordi Gonzalez
Title: Harmony Potentials for Joint Classification and Segmentation
Type: Conference Article
Year: 2010
Publication: 23rd IEEE Conference on Computer Vision and Pattern Recognition
Pages: 3280-3287
Abstract: Hierarchical conditional random fields have been successfully applied to object segmentation. One reason is their ability to incorporate contextual information at different scales. However, these models do not allow multiple labels to be assigned to a single node. At higher scales in the image, this yields an oversimplified model, since multiple classes can reasonably be expected to appear within one region. This simplified model especially limits the impact that observations at larger scales may have on the CRF model. Neglecting the information at larger scales is undesirable, since class-label estimates based on these scales are more reliable than those at smaller, noisier scales. To address this problem, we propose a new potential, called the harmony potential, which can encode any possible combination of class labels. We propose an effective sampling strategy that renders the underlying optimization problem tractable. Results show that our approach obtains state-of-the-art results on two challenging datasets: Pascal VOC 2009 and MSRC-21.
Address: San Francisco, CA, USA
ISSN: 1063-6919
ISBN: 978-1-4244-6984-0
Conference: CVPR
Notes: ADAS; CIC; ISE
Approved: no
Call Number: ADAS @ adas @ GBW2010
Serial: 1296
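
A generic two-level CRF energy of the kind the abstract builds on can be written as below; the distinguishing point of the harmony potential is that the global node may take any combination of class labels rather than a single one. This is a schematic form under that reading, not the paper's exact notation:

```latex
% x_i: local (pixel/superpixel) labels; x_g: global image-level node, which in
% the harmony-potential model ranges over subsets of the C classes.
E(\mathbf{x}) = \sum_{i} \psi_i(x_i)
              + \sum_{(i,j)} \psi_{ij}(x_i, x_j)
              + \sum_{i} \psi_{g}(x_g, x_i),
\qquad x_g \subseteq \{1, \dots, C\}
```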