Author: Enric Marti; Jordi Regincos; Jaime Lopez-Krahe; Juan J. Villanueva
Title: Hand line drawing interpretation as three-dimensional objects
Type: Journal Article
Year: 1993
Publication: Signal Processing – Intelligent systems for signal and image understanding
Volume: 32
Issue: 1-2
Pages: 91-110
Keywords: Line drawing interpretation; line labelling; scene analysis; man-machine interaction; CAD input; line extraction
Abstract: In this paper we present a technique to interpret hand line drawings as objects in three-dimensional space. The object domain considered is based on planar surfaces with straight edges, concretely on an extension of the Origami world to hidden lines. The line drawing represents the object under orthographic projection and is sensed using a scanner. Our method is structured in two modules: feature extraction and feature interpretation. In the first, image processing techniques are applied under certain tolerance margins to detect lines and junctions in the hand line drawing. The feature interpretation module is founded on line labelling techniques using a labelled junction dictionary. A labelling algorithm is proposed here; it uses relaxation techniques to reduce the number of labels incompatible with the junction dictionary, so that convergence towards a solution is accelerated. We formulate labelling hypotheses intended to eliminate two sets of labelled interpretations: those which are compatible with the dictionary but do not correspond to three-dimensional objects, and those which represent objects unlikely to be specified by means of a line drawing. New entities arise in the line drawing as a result of the extension of the Origami world; these are defined in order to state the assumptions of our method and to clarify the proposed algorithms. This technique is framed within a project aimed at implementing a system for creating 3D objects that improves man-machine interaction in CAD systems.
Publisher: Elsevier North-Holland, Inc.
Place of Publication: Amsterdam, The Netherlands
ISSN: 0165-1684
Notes: IAM; ISE
Approved: no
Call Number: IAM @ iam @ MRL1993
Serial: 1611
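
The labelling algorithm described in the abstract above prunes, by relaxation, junction labels that are incompatible with a labelled junction dictionary. As a rough illustration only (not the paper's algorithm), the following Python sketch applies Waltz-style discrete relaxation to toy data; the label set, the junction dictionary entries, and the example drawing are placeholder assumptions.

    # Illustrative sketch of discrete-relaxation line labelling (not the paper's
    # exact algorithm). Labels, dictionary entries and the drawing are toy data.

    LABELS = {'+', '-', '>', '<'}   # convex, concave, and two occluding directions

    # Placeholder junction dictionary: junction type -> admissible label tuples
    # for its incident edges, listed in a fixed order. Real Huffman/Clowes-style
    # tables are larger and orientation-sensitive.
    JUNCTION_DICT = {
        'L': {('>', '<'), ('<', '>'), ('+', '>'), ('<', '+'), ('-', '<'), ('>', '-')},
        'W': {('+', '-', '+'), ('<', '-', '>'), ('-', '-', '-')},
        'Y': {('+', '+', '+'), ('-', '-', '-'), ('<', '>', '-'), ('-', '<', '>'), ('>', '-', '<')},
    }

    def relax(junctions, edges):
        """Prune candidate edge labels unsupported by the junction dictionary.

        junctions: {junction_id: (junction_type, [incident edge ids in order])}
        edges:     {edge_id: set of candidate labels}  -- mutated in place
        """
        changed = True
        while changed:
            changed = False
            for j_type, edge_ids in junctions.values():
                # For each incident edge, collect the labels occurring in at least
                # one dictionary entry fully compatible with the current candidates.
                supported = [set() for _ in edge_ids]
                for entry in JUNCTION_DICT[j_type]:
                    if all(lbl in edges[e] for lbl, e in zip(entry, edge_ids)):
                        for k, lbl in enumerate(entry):
                            supported[k].add(lbl)
                for e, keep in zip(edge_ids, supported):
                    if edges[e] - keep:       # some candidate lost all support
                        edges[e] &= keep
                        changed = True
        return edges

    # Toy usage: three 'L' junctions sharing three edges; on a real drawing the
    # candidate sets shrink as the junction constraints interact.
    junctions = {'A': ('L', ['e1', 'e2']), 'B': ('L', ['e2', 'e3']), 'C': ('L', ['e3', 'e1'])}
    edges = {e: set(LABELS) for e in ('e1', 'e2', 'e3')}
    print(relax(junctions, edges))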
 

 
Author: Carles Fernandez; Pau Baiget; Xavier Roca; Jordi Gonzalez
Title: Interpretation of Complex Situations in a Semantic-based Surveillance Framework
Type: Journal
Year: 2008
Publication: Signal Processing: Image Communication, Special Issue on Semantic Analysis for Interactive Multimedia Services
Volume: 23
Issue: 7
Pages: 554-569
Keywords: Cognitive vision system; Situation analysis; Applied ontologies
Abstract: The integration of cognitive capabilities in computer vision systems requires both high semantic expressiveness and the ability to cope with the high computational costs that arise from the large amounts of data involved in the analysis. This contribution describes a cognitive vision system conceived to automatically provide high-level interpretations of complex real-time situations in outdoor and indoor scenarios, and eventually to maintain communication with casual end users in multiple languages. The main contributions are: (i) the design of an integrative multilevel architecture for cognitive surveillance purposes; (ii) the proposal of a coherent taxonomy of knowledge to guide the process of interpretation, which leads to the conception of a situation-based ontology; (iii) the use of situational analysis for content detection and the progressive interpretation of semantically rich scenes by managing incomplete or uncertain knowledge; and (iv) the use of such an ontological background to enable multilingual capabilities and advanced end-user interfaces. Experimental results are provided to show the feasibility of the proposed approach.
Notes: ISE
Approved: no
Call Number: ISE @ ise @ FBR2008
Serial: 954
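
The system summarized above interprets low-level detections as higher-level situations guided by a situation-based ontology. As a loose illustration of that general idea only (not the paper's architecture or ontology), the Python sketch below lifts toy person/bag detections to a hypothetical "abandoned_object" situation; all class names, labels and thresholds are assumptions.

    from collections import defaultdict
    from dataclasses import dataclass
    from math import hypot

    @dataclass
    class Detection:
        frame: int
        kind: str      # e.g. 'person' or 'bag'
        track_id: int
        x: float
        y: float

    @dataclass
    class Situation:
        label: str
        subject: int   # track id of the object the situation is about
        frame: int     # frame at which the situation is asserted

    def interpret(detections, radius=2.0, unattended_frames=100):
        """Lift low-level detections to a hypothetical 'abandoned_object' situation."""
        by_frame = defaultdict(list)
        for d in detections:
            by_frame[d.frame].append(d)

        unattended = defaultdict(int)   # bag track id -> consecutive unattended frames
        situations = []
        for frame in sorted(by_frame):
            people = [d for d in by_frame[frame] if d.kind == 'person']
            for bag in (d for d in by_frame[frame] if d.kind == 'bag'):
                attended = any(hypot(p.x - bag.x, p.y - bag.y) <= radius for p in people)
                unattended[bag.track_id] = 0 if attended else unattended[bag.track_id] + 1
                if unattended[bag.track_id] == unattended_frames:
                    situations.append(Situation('abandoned_object', bag.track_id, frame))
        return situations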
 

 
Author: Diego Velazquez; Pau Rodriguez; Alexandre Lacoste; Issam H. Laradji; Xavier Roca; Jordi Gonzalez
Title: Evaluating Counterfactual Explainers
Type: Journal
Year: 2023
Publication: Transactions on Machine Learning Research
Abbreviated Journal: TMLR
Keywords: Explainability; Counterfactuals; XAI
Abstract: Explainability methods have been widely used to provide insight into the decisions made by statistical models, thus facilitating their adoption in various domains within the industry. Counterfactual explanation methods aim to improve our understanding of a model by perturbing samples in a way that would alter its response in an unexpected manner. This information is helpful for users and for machine learning practitioners to understand and improve their models. Given the value provided by counterfactual explanations, there is a growing interest in the research community to investigate and propose new methods. However, we identify two issues that could hinder progress in this field. (1) Existing metrics do not accurately reflect the value of an explainability method for its users. (2) Comparisons between methods are usually performed with datasets like CelebA, where images are annotated with attributes that do not fully describe them and with subjective attributes such as "Attractive". In this work, we address these problems by proposing an evaluation method with a principled metric to evaluate and compare different counterfactual explanation methods. The evaluation method is based on a synthetic dataset where images are fully described by their annotated attributes. As a result, we are able to perform a fair comparison of multiple explainability methods in the recent literature and obtain insights about their performance. We make the code public for the benefit of the research community.
Notes: ISE
Approved: no
Call Number: Admin @ si @ VRL2023
Serial: 3891
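
The paper proposes a principled metric computed on a synthetic, fully-attributed dataset; that metric is not reproduced here. The Python sketch below only illustrates, under assumed inputs, how full attribute annotations can be used to score counterfactuals with two common notions, validity (did the requested attribute flip?) and sparsity (how many other attributes changed?); both are illustrative stand-ins, not the paper's metric.

    import numpy as np

    def counterfactual_score(original, counterfactual, target_idx):
        """Toy evaluation of a batch of counterfactuals on fully-attributed data.

        original, counterfactual: (N, A) binary attribute matrices (ground-truth
        annotations of each image and of its generated counterfactual).
        target_idx: index of the attribute the explainer was asked to flip.

        Returns (validity, sparsity): fraction of samples where the target
        attribute actually flipped, and mean number of non-target attributes
        changed as a side effect (lower is better).
        """
        original = np.asarray(original, dtype=bool)
        counterfactual = np.asarray(counterfactual, dtype=bool)
        changed = original != counterfactual                    # (N, A)
        validity = changed[:, target_idx].mean()
        side_effects = np.delete(changed, target_idx, axis=1).sum(axis=1)
        sparsity = side_effects.mean()
        return float(validity), float(sparsity)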