Author Bogdan Raducanu; Alireza Bosaghzadeh; Fadi Dornaika
  Title Facial Expression Recognition based on Multi-view Observations with Application to Social Robotics Type Conference Article
  Year 2014 Publication 1st Workshop on Computer Vision for Affective Computing Abbreviated Journal  
  Volume Issue Pages 1-8  
  Keywords  
  Abstract Human-robot interaction is a hot topic nowadays in the social robotics community. One crucial aspect is affective communication, which is encoded through facial expressions. In this paper, we propose a novel approach for facial expression recognition that exploits an efficient and adaptive graph-based label propagation (semi-supervised mode) in a multi-observation framework. The facial features are extracted using an appearance-based, view- and texture-independent 3D face tracker. Our method has been extensively tested on the CMU dataset and compared with other methods for graph construction. With the proposed approach, we developed an application for an AIBO robot, in which it mirrors the recognized facial expression.  
  Address Singapore; November 2014  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ACCV  
  Notes LAMP; Approved no  
  Call Number Admin @ si @ RBD2014 Serial 2599  
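
A minimal sketch of the graph-based label propagation idea referenced in the abstract above, using the standard consistency-method iteration F <- alpha*S*F + (1-alpha)*Y on a plain Gaussian kNN affinity graph; this is not the paper's adaptive graph construction, and all parameter values are illustrative assumptions.

```python
import numpy as np

def label_propagation(X, y, n_classes, k=10, alpha=0.9, iters=100):
    """Semi-supervised label propagation on a kNN affinity graph.

    X: (n, d) feature matrix; y: (n,) integer labels, -1 for unlabeled points.
    Returns predicted labels for all n points.
    """
    n = X.shape[0]
    # Pairwise squared distances and Gaussian affinities.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (np.median(d2) + 1e-12))
    np.fill_diagonal(W, 0.0)
    # Keep only the k strongest neighbours per node, then symmetrize.
    weakest = np.argsort(W, axis=1)[:, :-k]
    W[np.arange(n)[:, None], weakest] = 0.0
    W = np.maximum(W, W.T)
    # Symmetric normalization S = D^-1/2 W D^-1/2.
    d = W.sum(1)
    S = W / np.sqrt(np.outer(d, d) + 1e-12)
    # One-hot seed matrix for the labeled points.
    Y = np.zeros((n, n_classes))
    Y[y >= 0, y[y >= 0]] = 1.0
    F = Y.copy()
    for _ in range(iters):  # F <- alpha*S*F + (1-alpha)*Y
        F = alpha * (S @ F) + (1 - alpha) * Y
    return F.argmax(1)
```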
 

 
Author Marc Oliu; Ciprian Corneanu; Laszlo A. Jeni; Jeffrey F. Cohn; Takeo Kanade; Sergio Escalera
  Title Continuous Supervised Descent Method for Facial Landmark Localisation Type Conference Article
  Year 2016 Publication 13th Asian Conference on Computer Vision Abbreviated Journal  
  Volume 10112 Issue Pages 121-135  
  Keywords  
  Abstract Recent methods for facial landmark localisation perform well on close-to-frontal faces but have problems generalising to large head rotations. In order to address this issue, we propose a second-order linear regression method that is both compact and robust against strong rotations. We provide a closed-form solution, making the method fast to train. We test the method's performance on two challenging datasets. The first has been used intensively by the community. The second has been specially generated from a well-known 3D face dataset; it is considerably more challenging, including a higher diversity of rotations and more samples than any other existing public dataset. The proposed method is compared against state-of-the-art approaches, including RCPR, CGPRT, LBF, CFSS, and GSDM. Results on both datasets show that the proposed method offers state-of-the-art performance on near-frontal view data, improves on state-of-the-art methods for more challenging head rotation problems, and keeps a compact model size.  
  Address Taipei; Taiwan; November 2016  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ACCV  
  Notes HuPBA;MILAB; Approved no  
  Call Number Admin @ si @ OCJ2016 Serial 2838  
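
Supervised descent methods learn a cascade of regressors mapping local features to landmark updates. Below is a minimal sketch of one such regression step with a ridge-regularized closed-form solution; it shows the generic first-order SDM update, not the paper's continuous second-order formulation, and all names and values are illustrative.

```python
import numpy as np

def train_sdm_step(features, shape_deltas, reg=1e-3):
    """Learn one supervised-descent update dx ~ R*phi + b by ridge regression.

    features: (n, f) features phi extracted at the current shape estimates.
    shape_deltas: (n, 2L) displacements from current to ground-truth landmarks.
    """
    n = features.shape[0]
    Phi = np.hstack([features, np.ones((n, 1))])   # append a bias column
    A = Phi.T @ Phi + reg * np.eye(Phi.shape[1])   # regularized normal equations
    Rb = np.linalg.solve(A, Phi.T @ shape_deltas)  # closed-form least squares
    return Rb[:-1], Rb[-1]                         # (R, b)

def apply_sdm_step(R, b, features):
    """Predicted landmark update for new images."""
    return features @ R + b
```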
 

 
Author Sounak Dey; Anjan Dutta; Suman Ghosh; Ernest Valveny; Josep Llados
  Title Aligning Salient Objects to Queries: A Multi-modal and Multi-object Image Retrieval Framework Type Conference Article
  Year 2018 Publication 14th Asian Conference on Computer Vision Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract In this paper we propose an approach for multi-modal image retrieval in multi-labelled images. A multi-modal deep network architecture is formulated to jointly model sketches and text as input query modalities in a common embedding space, which is then further aligned with the image feature space. Our architecture also relies on salient object detection through a supervised LSTM-based visual attention model learned from convolutional features. Both the alignment between the queries and the image and the supervision of the attention on the images are obtained by generalizing the Hungarian algorithm using different loss functions. This permits encoding the object-based features and their alignment with the query irrespective of the co-occurrence of different objects in the training set. We validate the performance of our approach on standard single/multi-object datasets, showing state-of-the-art performance on every dataset.  
  Address Perth; Australia; December 2018  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ACCV  
  Notes DAG; 600.097; 600.121; 600.129 Approved no  
  Call Number Admin @ si @ DDG2018a Serial 3151  
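
The query-to-region alignment described above is at heart an assignment problem. A minimal sketch of that core step, using SciPy's Hungarian-algorithm implementation on a cosine-distance cost matrix; the paper generalizes this with task-specific loss functions, which are omitted here.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_queries_to_regions(query_emb, region_emb):
    """One-to-one assignment of query embeddings to salient-region embeddings.

    query_emb: (q, d); region_emb: (r, d); both assumed L2-normalized.
    Returns (query_idx, region_idx) minimizing the total cosine distance.
    """
    cost = 1.0 - query_emb @ region_emb.T  # cosine distance as assignment cost
    return linear_sum_assignment(cost)
```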
 

 
Author Antonio Lopez; J. Hilgenstock; A. Busse; Ramon Baldrich; Felipe Lumbreras; Joan Serrat
  Title Nighttime Vehicle Detection for Intelligent Headlight Control Type Conference Article
  Year 2008 Publication Advanced Concepts for Intelligent Vision Systems, 10th International Conference, Proceedings Abbreviated Journal  
  Volume 5259 Issue Pages 113–124  
  Keywords Intelligent Headlights; vehicle detection  
  Abstract  
  Address Juan-les-Pins, France  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ACIVS  
  Notes ADAS;CIC Approved no  
  Call Number ADAS @ adas @ LHB2008a Serial 1098  
 

 
Author Mohammad Rouhani; Angel Sappa
  Title A Novel Approach to Geometric Fitting of Implicit Quadrics Type Conference Article
  Year 2009 Publication 8th International Conference on Advanced Concepts for Intelligent Vision Systems Abbreviated Journal  
  Volume 5807 Issue Pages 121–132  
  Keywords  
  Abstract This paper presents a novel approach for estimating the geometric distance from a given point to the corresponding implicit quadric curve/surface. The proposed estimation is based on the height of a tetrahedron, which is used as a coarse but reliable estimate of the real distance. The estimated distance is then used to find the best set of quadric parameters by means of the Levenberg-Marquardt algorithm, a common framework in other geometric fitting approaches. Comparisons of the proposed approach with previous ones show improvements both in CPU time and in the accuracy of the obtained results.  
  Address Bordeaux, France  
  Corporate Author Thesis  
  Publisher Springer Berlin Heidelberg Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN 0302-9743 ISBN 978-3-642-04696-4 Medium  
  Area Expedition Conference ACIVS  
  Notes ADAS Approved no  
  Call Number ADAS @ adas @ RoS2009 Serial 1194  
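
A hedged sketch of the overall fitting loop described in the abstract: estimate a geometric distance per point and minimize the residuals with Levenberg-Marquardt. For brevity, the paper's tetrahedron-height estimate is replaced here by a first-order |Q|/||grad Q|| distance approximation, and a 2D conic stands in for a general quadric.

```python
import numpy as np
from scipy.optimize import least_squares

def conic_value_and_grad(p, pts):
    """Implicit conic Q(x,y) = a x^2 + b xy + c y^2 + d x + e y + f and |grad Q|."""
    a, b, c, d, e, f = p
    x, y = pts[:, 0], pts[:, 1]
    Q = a * x**2 + b * x * y + c * y**2 + d * x + e * y + f
    gx = 2 * a * x + b * y + d
    gy = b * x + 2 * c * y + e
    return Q, np.hypot(gx, gy)

def residuals(p, pts):
    # First-order geometric distance estimate |Q| / ||grad Q|| at each point.
    Q, gnorm = conic_value_and_grad(p, pts)
    return Q / (gnorm + 1e-12)

def fit_conic(pts, p0=None):
    """Levenberg-Marquardt minimization of the estimated geometric distances."""
    p0 = np.array([1.0, 0.0, 1.0, 0.0, 0.0, -1.0]) if p0 is None else p0
    return least_squares(residuals, p0, args=(pts,), method="lm").x
```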
 

 
Author Cesar Isaza; Joaquin Salas; Bogdan Raducanu
  Title Toward the Detection of Urban Infrastructures Edge Shadows Type Conference Article
  Year 2010 Publication 12th International Conference on Advanced Concepts for Intelligent Vision Systems Abbreviated Journal  
  Volume 6474 Issue I Pages 30–37  
  Keywords  
  Abstract In this paper, we propose a novel technique to detect the shadows cast by urban infrastructure, such as buildings, billboards, and traffic signs, using a sequence of images taken from a fixed camera. In our approach, we compute two different background models in parallel: one for the edges and one for the reflected light intensity. An algorithm is proposed to train the system to distinguish between moving edges in general and edges that belong to static objects, creating an edge background model. Then, during operation, a background intensity model allows us to separate moving from static objects. Those edges included in the moving objects and those that belong to the edge background model are subtracted from the current image edges. The remaining edges are the ones cast by urban infrastructure. Our method is tested on a typical crossroad scene and the results show that the approach is sound and promising.  
  Address Sydney, Australia  
  Corporate Author Thesis  
  Publisher Springer Berlin Heidelberg Place of Publication Editor J. Blanc-Talon et al.  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN 0302-9743 ISBN 978-3-642-17687-6 Medium  
  Area Expedition Conference ACIVS  
  Notes OR;MV Approved no  
  Call Number BCNPCL @ bcnpcl @ ISR2010 Serial 1458  
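
A toy sketch of the edge half of the two parallel background models: accumulate per-pixel edge frequencies over time, treat frequently observed edges as the static edge background, and subtract them (together with moving-object edges) from the current frame. OpenCV's Canny detector and all thresholds are illustrative assumptions.

```python
import numpy as np
import cv2

class EdgeBackgroundModel:
    """Running model of static edges; frequently observed edges are 'background'."""

    def __init__(self, shape, lr=0.02, static_thr=0.7):
        self.freq = np.zeros(shape, np.float32)  # per-pixel edge frequency
        self.lr = lr
        self.static_thr = static_thr

    def update(self, gray):
        """Accumulate edge evidence from one uint8 grayscale frame."""
        edges = cv2.Canny(gray, 50, 150) > 0
        self.freq = (1 - self.lr) * self.freq + self.lr * edges

    def shadow_edges(self, gray, moving_mask):
        """Current edges that are neither background nor on moving objects."""
        edges = cv2.Canny(gray, 50, 150) > 0
        static = self.freq > self.static_thr
        return edges & ~static & ~moving_mask
```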
 

 
Author David Geronimo; Frederic Lerasle; Antonio Lopez
  Title State-driven particle filter for multi-person tracking Type Conference Article
  Year 2012 Publication 11th International Conference on Advanced Concepts for Intelligent Vision Systems Abbreviated Journal  
  Volume 7517 Issue Pages 467-478  
  Keywords human tracking  
  Abstract Multi-person tracking can be exploited in applications such as driver assistance, surveillance, multimedia and human-robot interaction. With the help of human detectors, particle filters offer a robust method able to filter noisy detections and provide temporal coherence. However, some traditional problems such as occlusions with other targets or the scene, temporal drifting, or the detection of lost targets are rarely considered, degrading the system's performance. Some authors propose to overcome these problems using heuristics that are neither explained nor formalized in their papers, for instance by defining exceptions to the model updating depending on track overlaps. In this paper we propose to formalize these events by the use of a state graph, defining the current state of each track (e.g., potential, tracked, occluded or lost) and the transitions between states in an explicit way. This approach has the advantage of linking track states to actions such as the online updating of the underlying models, which gives flexibility to the system. It provides an explicit representation to adapt the multiple parallel trackers to the context, i.e., each track can use a specific filtering strategy, dynamic model, number of particles, etc., depending on its state. We implement this technique in a single-camera multi-person tracker and test it on public video sequences.  
  Address Brno, Czech Republic  
  Corporate Author Thesis  
  Publisher Springer Place of Publication Heidelberg Editor J. Blanc-Talon et al.  
  Language English Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ACIVS  
  Notes ADAS Approved yes  
  Call Number GLL2012; ADAS @ adas @ gll2012a Serial 1990  
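
A minimal sketch of the explicit state graph proposed in the abstract, with the example states it mentions (potential, tracked, occluded, lost) and per-state tracker settings; the particle filter itself is omitted, and the transition set and configuration values are illustrative assumptions.

```python
from enum import Enum, auto

class TrackState(Enum):
    POTENTIAL = auto()
    TRACKED = auto()
    OCCLUDED = auto()
    LOST = auto()

# Explicitly allowed transitions between track states.
TRANSITIONS = {
    TrackState.POTENTIAL: {TrackState.TRACKED, TrackState.LOST},
    TrackState.TRACKED: {TrackState.OCCLUDED, TrackState.LOST},
    TrackState.OCCLUDED: {TrackState.TRACKED, TrackState.LOST},
    TrackState.LOST: {TrackState.TRACKED},
}

# Per-state tracker configuration, e.g. particle count and model update policy.
CONFIG = {
    TrackState.POTENTIAL: dict(n_particles=50, update_model=False),
    TrackState.TRACKED: dict(n_particles=100, update_model=True),
    TrackState.OCCLUDED: dict(n_particles=200, update_model=False),  # no drift while hidden
    TrackState.LOST: dict(n_particles=0, update_model=False),
}

class Track:
    def __init__(self):
        self.state = TrackState.POTENTIAL

    def transition(self, new_state):
        """Move to a new state and return that state's tracker configuration."""
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
        return CONFIG[new_state]
```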
 

 
Author Yainuvis Socarras; David Vazquez; Antonio Lopez; David Geronimo; Theo Gevers
  Title Improving HOG with Image Segmentation: Application to Human Detection Type Conference Article
  Year 2012 Publication 11th International Conference on Advanced Concepts for Intelligent Vision Systems Abbreviated Journal  
  Volume 7517 Issue Pages 178-189  
  Keywords Segmentation; Pedestrian Detection  
  Abstract In this paper we improve the histogram of oriented gradients (HOG), a core descriptor of state-of-the-art object detection, by using higher-level information coming from image segmentation. The idea is to re-weight the descriptor while computing it, without increasing its size. The benefits of the proposal are two-fold: (i) the performance of the detector improves because the descriptor information is enriched, and (ii) the method takes advantage of image segmentation information, which is in fact likely to be used in other stages of the detection system, such as candidate generation or refinement. We test our technique on the INRIA person dataset, which was originally developed to test HOG, embedding it in a human detection system. The well-known mean-shift segmentation method (from smaller to larger super-pixels) and different methods to re-weight the original descriptor (constant, region-luminance, color- or texture-dependent) have been evaluated. We achieve a performance improvement of 4.47% in detection rate by using differences of color between contour pixel neighborhoods as the re-weighting function.  
  Address Brno, Czech Republic  
  Corporate Author Thesis  
  Publisher Springer Berlin Heidelberg Place of Publication Editor J. Blanc-Talon et al.  
  Language English Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN 0302-9743 ISBN 978-3-642-33139-8 Medium  
  Area Expedition Conference ACIVS  
  Notes ADAS;ISE Approved no  
  Call Number ADAS @ adas @ SLV2012 Serial 1980  
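
A toy single-cell sketch of the re-weighting idea: per-pixel weights derived from a segmentation label map scale the gradient magnitudes before histogramming, so the descriptor size is unchanged. The constant boundary boost below is a placeholder assumption, not the paper's color-difference weighting.

```python
import numpy as np

def cell_hog(gray, seg, n_bins=9, boundary_boost=2.0):
    """Orientation histogram of one HOG cell, re-weighted by segmentation boundaries.

    gray: (h, w) float image patch (one cell); seg: (h, w) integer region labels.
    """
    gy, gx = np.gradient(gray)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation in [0, pi)
    # Pixels whose neighbourhood crosses a segment boundary get boosted.
    boundary = np.zeros_like(seg, bool)
    boundary[:-1, :] |= seg[:-1, :] != seg[1:, :]
    boundary[:, :-1] |= seg[:, :-1] != seg[:, 1:]
    weight = np.where(boundary, boundary_boost, 1.0)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=(mag * weight).ravel(), minlength=n_bins)
    return hist / (np.linalg.norm(hist) + 1e-12)  # same size as a plain HOG cell
```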
 

 
Author Dennis G. Romero; Anselmo Frizera; Angel Sappa; Boris X. Vintimilla; Teodiano F. Bastos
  Title A predictive model for human activity recognition by observing actions and context Type Conference Article
  Year 2015 Publication Advanced Concepts for Intelligent Vision Systems, Proceedings of 16th International Conference, ACIVS 2015 Abbreviated Journal  
  Volume 9386 Issue Pages 323-333  
  Keywords  
  Abstract This paper presents a novel model to estimate human activities, where a human activity is defined by a set of human actions. The proposed approach is based on Recurrent Neural Networks (RNN) and Bayesian inference through the continuous monitoring of human actions and their surrounding environment. In the current work, human activities are inferred considering not only visual analysis but also additional resources: external sources of information, such as context information, are incorporated to contribute to the activity estimation. The novelty of the proposed approach lies in the way the information is encoded, so that it can later be associated according to a predefined semantic structure. Hence, a pattern representing a given activity can be defined by a set of actions plus contextual information or other kinds of information that could be relevant to describe the activity. Experimental results with real data are provided, showing the validity of the proposed approach.  
  Address Catania; Italy; October 2015  
  Corporate Author Thesis  
  Publisher Springer International Publishing Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN 0302-9743 ISBN 978-3-319-25902-4 Medium  
  Area Expedition Conference ACIVS  
  Notes ADAS; 600.076 Approved no  
  Call Number Admin @ si @ RFS2015 Serial 2661  
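
A minimal sketch of the inference side described above: actions recognized over time (the RNN is abstracted away here) are combined with a context prior through a Bayesian update to score candidate activities. All probability tables are illustrative assumptions.

```python
import numpy as np

def activity_posterior(action_seq, likelihood, context_prior):
    """Posterior over activities after observing a sequence of recognized actions.

    likelihood: (n_activities, n_actions) table of P(action | activity).
    context_prior: (n_activities,) prior from context (location, time of day, ...).
    action_seq: iterable of action indices, e.g. output of an action classifier.
    """
    log_post = np.log(context_prior + 1e-12)
    for a in action_seq:
        log_post += np.log(likelihood[:, a] + 1e-12)  # accumulate action evidence
    log_post -= log_post.max()                        # stabilize before exponentiating
    post = np.exp(log_post)
    return post / post.sum()
```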
 

 
Author Oriol Ramos Terrades; Alejandro Hector Toselli; Nicolas Serrano; Veronica Romero; Enrique Vidal; Alfons Juan
  Title Interactive layout analysis and transcription systems for historic handwritten documents Type Conference Article
  Year 2010 Publication 10th ACM Symposium on Document Engineering Abbreviated Journal  
  Volume Issue Pages 219–222  
  Keywords Handwriting recognition; Interactive predictive processing; Partial supervision; Interactive layout analysis  
  Abstract The amount of digitized legacy documents has been rising dramatically over the last years, mainly due to the increasing number of on-line digital libraries publishing this kind of document, waiting to be classified and finally transcribed into a textual electronic format (such as ASCII or PDF). Nevertheless, most of the available fully-automatic applications addressing this task are far from perfect, and heavy, inefficient human intervention is often required to check and correct their results. In contrast, multimodal interactive-predictive approaches allow users to participate in the process, helping the system to improve the overall performance. With this in mind, two sets of recent advances are introduced in this work: a novel interactive method for text block detection and two multimodal interactive handwritten text transcription systems which use active learning and interactive-predictive technologies in the recognition process.  
  Address Manchester, United Kingdom  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ACM  
  Notes DAG Approved no  
  Call Number Admin @ si @RTS2010 Serial 1857  
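
A toy sketch of the interactive-predictive protocol these systems build on: the user validates or corrects a transcript prefix and the system re-predicts the suffix, here simply by re-ranking an n-best list rather than re-decoding with a real recognizer.

```python
def complete_from_prefix(validated_prefix, nbest):
    """Return the highest-ranked hypothesis consistent with the user's prefix.

    nbest: list of (score, transcript) pairs, best first.
    Falls back to the prefix itself when no hypothesis matches.
    """
    for _, hyp in nbest:
        if hyp.startswith(validated_prefix):
            return hyp
    return validated_prefix

# Example interaction: the user corrects one word, the system re-predicts the rest.
nbest = [(-1.2, "the cat sat on the mat"), (-1.9, "the cap sat on the mat")]
print(complete_from_prefix("the cap", nbest))  # -> "the cap sat on the mat"
```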
 

 
Author Muhammad Anwer Rao; Fahad Shahbaz Khan; Joost Van de Weijer; Jorma Laaksonen
  Title Tex-Nets: Binary Patterns Encoded Convolutional Neural Networks for Texture Recognition Type Conference Article
  Year 2017 Publication 19th International Conference on Multimodal Interaction Abbreviated Journal  
  Volume Issue Pages  
  Keywords Convolutional Neural Networks; Texture Recognition; Local Binary Patterns  
  Abstract Recognizing materials and textures in realistic imaging conditions is a challenging computer vision problem. For many years, orderless representations based on local features were a dominant approach for texture recognition. Recently, deep local features extracted from the intermediate layers of a Convolutional Neural Network (CNN) have been used as filter banks. These dense local descriptors from a deep model, when encoded with Fisher Vectors, have been shown to provide excellent results for texture recognition. The CNN models employed in such approaches take RGB patches as input and train on a large amount of labeled images. We show that CNN models, which we call TEX-Nets, trained using mapped coded images with explicit texture information, provide complementary information to the standard deep models trained on RGB patches. We further investigate two deep architectures, namely early and late fusion, to combine the texture and color information. Experiments on benchmark texture datasets clearly demonstrate that TEX-Nets provide complementary information to the standard RGB deep network. Our approach provides large gains of 4.8%, 3.5%, 2.6% and 4.1% in accuracy on the DTD, KTH-TIPS-2a, KTH-TIPS-2b and Texture-10 datasets respectively, compared to the standard RGB network of the same architecture. Further, our final combination leads to consistent improvements over the state-of-the-art on all four datasets.  
  Address Glasgow; Scotland; November 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ACM  
  Notes LAMP; 600.109; 600.068; 600.120 Approved no  
  Call Number Admin @ si @ RKW2017 Serial 3038  
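
A minimal sketch of producing a texture-coded input image: a basic 3x3 local binary pattern map that could be fed to a CNN alongside (or instead of) RGB patches. This is a generic LBP coding, not necessarily the exact mapping used by TEX-Nets.

```python
import numpy as np

def lbp_map(gray):
    """Basic 8-neighbour local binary pattern code per pixel (borders left 0)."""
    g = gray.astype(np.float32)
    out = np.zeros(g.shape, np.uint8)
    c = g[1:-1, 1:-1]  # centre pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        out[1:-1, 1:-1] |= (nb >= c).astype(np.uint8) << bit
    return out  # feed as an extra input channel for the texture stream
```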
 

 
Author Xinhang Song; Haitao Zeng; Sixian Zhang; Luis Herranz; Shuqiang Jiang
  Title Generalized Zero-shot Learning with Multi-source Semantic Embeddings for Scene Recognition Type Conference Article
  Year 2020 Publication 28th ACM International Conference on Multimedia Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Recognizing visual categories from semantic descriptions is a promising way to extend the capability of a visual classifier beyond the concepts represented in the training data (i.e. seen categories). This problem is addressed by (generalized) zero-shot learning (GZSL) methods, which leverage semantic descriptions that connect unseen categories to seen ones (e.g. label embeddings, attributes). Conventional GZSL methods are designed mostly for object recognition. In this paper we focus on zero-shot scene recognition, a more challenging setting with hundreds of categories whose differences can be subtle and often localized in certain objects or regions. Conventional GZSL representations are not rich enough to capture these local discriminative differences. Addressing these limitations, we propose a feature generation framework with two novel components: 1) multiple sources of semantic information (i.e. attributes, word embeddings and descriptions), and 2) region descriptions that can enhance scene discrimination. To generate synthetic visual features we propose a two-step generative approach, where local descriptions are sampled and used as conditions to generate visual features. The generated features are then aggregated and used together with real features to train a joint classifier. In order to evaluate the proposed method, we introduce a new dataset for zero-shot scene recognition with multi-semantic annotations. Experimental results on the proposed dataset and the SUN Attribute dataset illustrate the effectiveness of the proposed method.  
  Address Virtual; October 2020  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ACM  
  Notes LAMP; 600.141; 600.120 Approved no  
  Call Number Admin @ si @ SZZ2020 Serial 3465  
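
A heavily hedged sketch of the two-step generative structure described above: sample region-description embeddings as conditions, generate synthetic visual features with a generator (hypothetical here, passed in as a function), aggregate them, and train a joint classifier on real plus synthetic features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def synthesize_class_features(descriptions, generator, n_samples=50, rng=None):
    """Step 1: sample description embeddings as conditions; step 2: generate features.

    descriptions: (m, d) region-description embeddings of one unseen class.
    generator: hypothetical trained model mapping (conditions, noise) -> features.
    """
    rng = np.random.default_rng() if rng is None else rng
    conds = descriptions[rng.integers(len(descriptions), size=n_samples)]
    noise = rng.standard_normal((n_samples, 64))
    feats = generator(conds, noise)                 # (n_samples, feature_dim)
    return feats.mean(axis=0, keepdims=True), feats  # aggregated + raw features

def train_joint_classifier(real_X, real_y, synth_X, synth_y):
    """Joint GZSL classifier over seen (real) and unseen (synthetic) classes."""
    X = np.vstack([real_X, synth_X])
    y = np.concatenate([real_y, synth_y])
    return LogisticRegression(max_iter=1000).fit(X, y)
```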
 

 
Author Raul Gomez; Yahui Liu; Marco de Nadai; Dimosthenis Karatzas; Bruno Lepri; Nicu Sebe
  Title Retrieval Guided Unsupervised Multi-domain Image to Image Translation Type Conference Article
  Year 2020 Publication 28th ACM International Conference on Multimedia Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Image-to-image translation aims to learn a mapping that transforms an image from one visual domain to another. Recent works assume that image descriptors can be disentangled into a domain-invariant content representation and a domain-specific style representation. Thus, translation models seek to preserve the content of source images while changing the style to a target visual domain. However, synthesizing new images is extremely challenging, especially in multi-domain translations, as the network has to compose content and style to generate reliable and diverse images in multiple domains. In this paper we propose the use of an image retrieval system to assist the image-to-image translation task. First, we train an image-to-image translation model to map images to multiple domains. Then, we train an image retrieval model using real and generated images to find images similar to a query one in content but in a different domain. Finally, we exploit the image retrieval system to fine-tune the image-to-image translation model and generate higher quality images. Our experiments show the effectiveness of the proposed solution and highlight the contribution of the retrieval network, which can benefit from additional unlabeled data and help image-to-image translation models in the presence of scarce data.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ACM  
  Notes DAG; 600.121 Approved no  
  Call Number Admin @ si @ GLN2020 Serial 3497  
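
A minimal sketch of the retrieval component: given content embeddings, find the gallery images most similar to a query while excluding the query's own domain. The embeddings, domain labels and k are illustrative assumptions.

```python
import numpy as np

def cross_domain_retrieve(query_emb, query_domain, gallery_emb, gallery_domains, k=5):
    """Indices of the k gallery images most similar in content but from another domain.

    query_emb: (d,); gallery_emb: (n, d); both assumed L2-normalized, so the
    dot product is cosine similarity. gallery_domains: (n,) domain labels.
    """
    sims = gallery_emb @ query_emb
    sims[gallery_domains == query_domain] = -np.inf  # exclude the query's own domain
    return np.argsort(-sims)[:k]
```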
 

 
Author Marc Bolaños; Maite Garolera; Petia Radeva
  Title Active labeling application applied to food-related object recognition Type Conference Article
  Year 2013 Publication 5th International Workshop on Multimedia for Cooking & Eating Activities Abbreviated Journal  
  Volume Issue Pages 45-50  
  Keywords  
  Abstract Every day, lifelogging devices available for recording different aspects of our daily life increase in number, quality and functions, just like the multiple applications we give them. Applying wearable devices to analyse the nutritional habits of people is a challenging application based on acquiring and analyzing life records over long periods of time. However, to extract the information of interest related to people's eating patterns, we need automatic methods to process large amounts of lifelogging data (e.g. recognition of food-related objects). Creating a rich set of manually labeled samples to train the algorithms is slow, tedious and subjective. To address this problem, we propose a novel method, in the framework of Active Labeling, for constructing a training set of thousands of images. Inspired by the hierarchical sampling method for active learning [6], we propose an Active Forest that organizes the data hierarchically for easy and fast labeling. Moreover, introducing a classifier into the hierarchical structures, as well as transforming the feature space for better data clustering, additionally improves the algorithm. Our method was successfully used to label 89,700 food-related objects and achieves a significant reduction in expert labeling time.  
  Address Barcelona; October 2013  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ACM-CEA  
  Notes MILAB Approved no  
  Call Number Admin @ si @ BGR2013b Serial 2637  
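
A toy sketch in the spirit of hierarchical sampling for active labeling: cluster the features, query the expert once per cluster at the most central sample, and propagate that label to the whole cluster. The paper's Active Forest additionally inserts classifiers and feature-space transforms, which are omitted here.

```python
import numpy as np
from sklearn.cluster import KMeans

def active_label(X, oracle, n_clusters=20, seed=0):
    """Label a dataset with roughly n_clusters expert queries.

    X: (n, d) features; oracle(i) returns the true label of sample i (the expert).
    """
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X)
    labels = np.empty(len(X), dtype=int)
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        # Query the expert on the sample closest to the cluster centre...
        centre_dist = ((X[members] - km.cluster_centers_[c]) ** 2).sum(1)
        rep = members[centre_dist.argmin()]
        labels[members] = oracle(rep)  # ...and propagate its label to the cluster.
    return labels
```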
 

 
Author Thanh Ha Do; Salvatore Tabbone; Oriol Ramos Terrades
  Title Document noise removal using sparse representations over learned dictionary Type Conference Article
  Year 2013 Publication Symposium on Document engineering Abbreviated Journal  
  Volume Issue Pages 161-168  
  Keywords  
  Abstract (Best paper award) In this paper, we propose an algorithm for denoising document images using sparse representations. Given a training set, this algorithm is able to learn the main document characteristics and also the kind of noise contained in the documents. In this perspective, we propose to model the noise energy based on the normalized cross-correlation between pairs of noisy and non-noisy documents. Experimental results on several datasets demonstrate the robustness of our method compared with the state-of-the-art.  
  Address Barcelona; October 2013  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-1-4503-1789-4 Medium  
  Area Expedition Conference ACM-DocEng  
  Notes DAG; 600.061 Approved no  
  Call Number Admin @ si @ DTR2013a Serial 2330  
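
A hedged sketch of patch-based sparse-representation denoising with a dictionary learned from clean documents, using scikit-learn's dictionary learning and OMP coding. The paper's noise-energy model based on normalized cross-correlation between noisy/clean pairs is omitted; patch size and sparsity level are illustrative.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

def denoise(noisy, clean_training_imgs, patch=(8, 8), n_atoms=128, n_nonzero=4):
    """Denoise a document image with sparse codes over a learned dictionary."""
    # Learn the dictionary on patches from clean (non-noisy) documents.
    train = np.vstack([extract_patches_2d(im, patch).reshape(-1, patch[0] * patch[1])
                       for im in clean_training_imgs])
    dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=n_nonzero).fit(train)
    # Sparse-code the noisy patches (mean-removed) and reconstruct.
    P = extract_patches_2d(noisy, patch).reshape(-1, patch[0] * patch[1])
    means = P.mean(1, keepdims=True)
    codes = dico.transform(P - means)                    # OMP sparse codes
    rec = (codes @ dico.components_ + means).reshape(-1, *patch)
    return reconstruct_from_patches_2d(rec, noisy.shape)  # average overlapping patches
```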