Records |
Author |
Egils Avots; M. Daneshmand; Andres Traumann; Sergio Escalera; G. Anbarjafari |
Title |
Automatic garment retexturing based on infrared information |
Type |
Journal Article |
Year |
2016 |
Publication |
Computers & Graphics |
Abbreviated Journal |
CG |
Volume |
59 |
Issue |
|
Pages |
28-38 |
Keywords |
Garment Retexturing; Texture Mapping; Infrared Images; RGB-D Acquisition Devices; Shading |
Abstract |
This paper introduces a new automatic technique for garment retexturing using a single static image along with the depth and infrared information obtained using the Microsoft Kinect II as the RGB-D acquisition device. First, the garment is segmented out from the image using either the Breadth-First Search algorithm or the semi-automatic procedure provided by the GrabCut method. Then texture domain coordinates are computed for each pixel belonging to the garment using normalised 3D information. Afterwards, shading is applied to the new colours from the texture image. As the main contribution of the proposed method, the latter information is obtained by extracting a linear map transforming the colour present in the infrared image to that of the RGB colour channels. One of the most important impacts of this strategy is that the resulting retexturing algorithm is colour-, pattern- and lighting-invariant. The experimental results show that it can be used to produce realistic representations, which is substantiated by implementing it under various experimental scenarios involving varying lighting intensities and directions. Successful results are also accomplished on video sequences, as well as on images of subjects taking different poses. Based on a Mean Opinion Score analysis conducted on many randomly chosen users, the method has been shown to produce more realistic-looking results than the existing state-of-the-art methods suggested in the literature. From a wide perspective, the proposed method can be used for retexturing all sorts of segmented surfaces, although the focus of this study is on garment retexturing, and the investigation of the configurations is steered accordingly, since the experiments target an application in the context of virtual fitting rooms. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
Elsevier |
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
HuPBA;MILAB; |
Approved |
no |
Call Number |
Admin @ si @ ADT2016 |
Serial |
2759 |
Permanent link to this record |
|
|
|
Author |
Simone Balocco; Maria Zuluaga; Guillaume Zahnd; Su-Lin Lee; Stefanie Demirci |
Title |
Computing and Visualization for Intravascular Imaging and Computer-Assisted Stenting |
Type |
Book Whole |
Year |
2016 |
Publication |
Computing and Visualization for Intravascular Imaging and Computer-Assisted Stenting |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
|
Keywords |
|
Abstract |
|
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
Elsevier |
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
9780128110188 |
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
MILAB |
Approved |
no |
Call Number |
Admin @ si @ BZZ2016 |
Serial |
2821 |
Permanent link to this record |
|
|
|
Author |
Pau Riba; Lutz Goldmann; Oriol Ramos Terrades; Diede Rusticus; Alicia Fornes; Josep Llados |
Title |
Table detection in business document images by message passing networks |
Type |
Journal Article |
Year |
2022 |
Publication |
Pattern Recognition |
Abbreviated Journal |
PR |
Volume |
127 |
Issue |
|
Pages |
108641 |
Keywords |
|
Abstract |
Tabular structures in business documents offer a complementary dimension to the raw textual data; for instance, they encode relationships among pieces of information. Nowadays, digital mailroom applications have become a key service for workflow automation; therefore, the detection and interpretation of tables is crucial. With the recent advances in information extraction, table detection and recognition have gained interest in document image analysis, in particular in the absence of rule lines and with unknown information about rows and columns. However, business documents usually contain sensitive content, limiting the amount of public benchmarking datasets. In this paper, we propose a graph-based approach for detecting tables in document images which does not require the raw content of the document. Hence, the sensitive content can be removed beforehand and, instead of using the raw image or textual content, we propose a purely structural approach to keep sensitive data anonymous. Our framework uses graph neural networks (GNNs) to describe the local repetitive structures that constitute a table. In particular, our main application domain is business documents. We have carefully validated our approach on two invoice datasets and a modern document benchmark. Our experiments demonstrate that tables can be detected by purely structural approaches. |
Address |
July 2022 |
Corporate Author |
|
Thesis |
|
Publisher |
Elsevier |
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
DAG; 600.162; 600.121 |
Approved |
no |
Call Number |
Admin @ si @ RGR2022 |
Serial |
3729 |
Permanent link to this record |
|
|
|
Author |
Patricia Suarez; Angel Sappa; Boris X. Vintimilla |
Title |
Deep learning-based vegetation index estimation |
Type |
Book Chapter |
Year |
2021 |
Publication |
Generative Adversarial Networks for Image-to-Image Translation |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
205-234 |
Keywords |
|
Abstract |
Chapter 9 |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
Elsevier |
Place of Publication |
|
Editor |
A. Solanki; A. Nayyar; M. Naved |
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
MSIAU; 600.122 |
Approved |
no |
Call Number |
Admin @ si @ SSV2021a |
Serial |
3578 |
Permanent link to this record |
|
|
|
Author |
Juan Borrego-Carazo; Carles Sanchez; David Castells; Jordi Carrabina; Debora Gil |
Title |
BronchoPose: an analysis of data and model configuration for vision-based bronchoscopy pose estimation |
Type |
Journal Article |
Year |
2023 |
Publication |
Computer Methods and Programs in Biomedicine |
Abbreviated Journal |
CMPB |
Volume |
228 |
Issue |
|
Pages |
107241 |
Keywords |
Videobronchoscopy guiding; Deep learning; Architecture optimization; Datasets; Standardized evaluation framework; Pose estimation |
Abstract |
Vision-based bronchoscopy (VB) models require the registration of the virtual lung model with the frames from the video bronchoscopy to provide effective guidance during the biopsy. The registration can be achieved by either tracking the position and orientation of the bronchoscopy camera or by calibrating its deviation from the pose (position and orientation) simulated in the virtual lung model. Recent advances in neural networks and temporal image processing have provided new opportunities for guided bronchoscopy. However, such progress has been hindered by the lack of comparative experimental conditions.
In the present paper, we share a novel synthetic dataset allowing for a fair comparison of methods. Moreover, this paper investigates several neural network architectures for the learning of temporal information at different levels of subject personalization. In order to improve orientation measurement, we also present a standardized comparison framework and a novel metric for camera orientation learning. Results on the dataset show that the proposed metric and architectures, as well as the standardized conditions, provide notable improvements to current state-of-the-art camera pose estimation in video bronchoscopy. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
Elsevier |
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
IAM; |
Approved |
no |
Call Number |
Admin @ si @ BSC2023 |
Serial |
3702 |
Permanent link to this record |
|
|
|
Author |
Mohamed Ali Souibgui; Alicia Fornes; Yousri Kessentini; Beata Megyesi |
Title |
Few shots are all you need: A progressive learning approach for low resource handwritten text recognition |
Type |
Journal Article |
Year |
2022 |
Publication |
Pattern Recognition Letters |
Abbreviated Journal |
PRL |
Volume |
160 |
Issue |
|
Pages |
43-49 |
Keywords |
|
Abstract |
Handwritten text recognition in low resource scenarios, such as manuscripts with rare alphabets, is a challenging problem. In this paper, we propose a few-shot learning-based handwriting recognition approach that significantly reduces the human annotation process by requiring only a few images of each alphabet symbol. The method consists of detecting all the symbols of a given alphabet in a textline image and decoding the obtained similarity scores into the final sequence of transcribed symbols. Our model is first pretrained on synthetic line images generated from an alphabet, which could differ from the alphabet of the target domain. A second training step is then applied to reduce the gap between the source and the target data. Since this retraining would require annotation of thousands of handwritten symbols together with their bounding boxes, we propose to avoid such human effort through an unsupervised progressive learning approach that automatically assigns pseudo-labels to the unlabeled data. The evaluation on different datasets shows that our model can lead to competitive results with a significant reduction in human effort. The code will be publicly available in the following repository: https://github.com/dali92002/HTRbyMatching |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
Elsevier |
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
DAG; 600.121; 600.162; 602.230 |
Approved |
no |
Call Number |
Admin @ si @ SFK2022 |
Serial |
3736 |
Permanent link to this record |
|
|
|
Author |
Yecong Wan; Yuanshuo Cheng; Mingwen Shao; Jordi Gonzalez |
Title |
Image rain removal and illumination enhancement done in one go |
Type |
Journal Article |
Year |
2022 |
Publication |
Knowledge-Based Systems |
Abbreviated Journal |
KBS |
Volume |
252 |
Issue |
|
Pages |
109244 |
Keywords |
|
Abstract |
Rain removal plays an important role in the restoration of degraded images, and CNN-based methods have recently achieved remarkable success. However, these approaches neglect that the appearance of real-world rain is often accompanied by low-light conditions, which further degrade image quality and thereby hinder the restoration task. It is therefore indispensable to jointly remove rain and enhance illumination for real-world rain image restoration. To this end, we propose a novel spatially-adaptive network, dubbed SANet, which can remove rain and enhance illumination in one go under the guidance of a degradation mask. Meanwhile, to fully utilize negative samples, a contrastive loss is proposed to preserve more natural textures and consistent illumination. In addition, we present a new synthetic dataset, named DarkRain, to boost the development of rain image restoration algorithms in practical scenarios. DarkRain not only contains different degrees of rain, but also considers different lighting conditions, more realistically simulating real-world rainfall scenarios. SANet is extensively evaluated on the proposed dataset and attains new state-of-the-art performance against other combined methods. Moreover, after a simple transformation, our SANet surpasses existing state-of-the-art algorithms in both rain removal and low-light image enhancement. |
Address |
Sept 2022 |
Corporate Author |
|
Thesis |
|
Publisher |
Elsevier |
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
ISE; 600.157; 600.168 |
Approved |
no |
Call Number |
Admin @ si @ WCS2022 |
Serial |
3744 |
Permanent link to this record |
|
|
|
Author |
David Sanchez-Mendoza; David Masip; Agata Lapedriza |
Title |
Emotion recognition from mid-level features |
Type |
Journal Article |
Year |
2015 |
Publication |
Pattern Recognition Letters |
Abbreviated Journal |
PRL |
Volume |
67 |
Issue |
Part 1 |
Pages |
66-74 |
Keywords |
Facial expression; Emotion recognition; Action units; Computer vision |
Abstract |
In this paper we present a study on the use of Action Units as mid-level features for automatically recognizing basic and subtle emotions. We propose a representation model based on mid-level facial muscular movement features. We encode these movements dynamically using the Facial Action Coding System, and propose to use these intermediate features based on Action Units (AUs) to classify emotions. AU activations are detected by fusing a set of spatiotemporal geometric and appearance features. The algorithm is validated in two applications: (i) the recognition of 7 basic emotions using the publicly available Cohn-Kanade database, and (ii) the inference of subtle emotional cues in the Newscast database. In this second scenario, we consider emotions that are perceived cumulatively over longer periods of time. In particular, we automatically classify whether video shots from public news TV channels refer to Good or Bad news. To deal with the different video lengths we propose a Histogram of Action Units and compute it using a sliding-window strategy on the frame sequences. Our approach achieves accuracies close to human perception. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
Elsevier B.V. |
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
0167-8655 |
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
OR;MV |
Approved |
no |
Call Number |
Admin @ si @ SML2015 |
Serial |
2746 |
Permanent link to this record |
|
|
|
Author |
Pedro Martins; Paulo Carvalho; Carlo Gatta |
Title |
On the completeness of feature-driven maximally stable extremal regions |
Type |
Journal Article |
Year |
2016 |
Publication |
Pattern Recognition Letters |
Abbreviated Journal |
PRL |
Volume |
74 |
Issue |
|
Pages |
9-16 |
Keywords |
Local features; Completeness; Maximally Stable Extremal Regions |
Abstract |
By definition, local image features provide a compact representation of the image in which most of the image information is preserved. This capability offered by local features has been overlooked, despite being relevant in many application scenarios. In this paper, we analyze and discuss the performance of feature-driven Maximally Stable Extremal Regions (MSER) in terms of the coverage of informative image parts (completeness). This type of feature results from an MSER extraction on saliency maps in which features related to object boundaries or even symmetry axes are highlighted. These maps are intended to be suitable domains for MSER detection, allowing this detector to provide a better coverage of informative image parts. Our experimental results, which were based on a large-scale evaluation, show that feature-driven MSER have relatively high completeness values and provide more complete sets than a traditional MSER detection, even when sets of similar cardinality are considered. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
Elsevier B.V. |
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
0167-8655 |
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
LAMP;MILAB; |
Approved |
no |
Call Number |
Admin @ si @ MCG2016 |
Serial |
2748 |
Permanent link to this record |
|
|
|
Author |
Gerard Canal; Sergio Escalera; Cecilio Angulo |
Title |
A Real-time Human-Robot Interaction system based on gestures for assistive scenarios |
Type |
Journal Article |
Year |
2016 |
Publication |
Computer Vision and Image Understanding |
Abbreviated Journal |
CVIU |
Volume |
149 |
Issue |
|
Pages |
65-77 |
Keywords |
Gesture recognition; Human Robot Interaction; Dynamic Time Warping; Pointing location estimation |
Abstract |
Natural and intuitive human interaction with robotic systems is a key point in developing robots that assist people in an easy and effective way. In this paper, a Human Robot Interaction (HRI) system able to recognize gestures usually employed in human non-verbal communication is introduced, and an in-depth study of its usability is performed. The system deals with dynamic gestures, such as waving or nodding, which are recognized using a Dynamic Time Warping approach based on gesture-specific features computed from depth maps. A static gesture consisting of pointing at an object is also recognized. The pointed location is then estimated in order to detect candidate objects the user may refer to. When the pointed object is unclear to the robot, a disambiguation procedure by means of either a verbal or gestural dialogue is performed. This skill allows the robot to pick up an object on behalf of the user, who might have difficulties doing so unaided. The overall system, composed of NAO and Wifibot robots, a Kinect v2 sensor and two laptops, is first evaluated in a structured lab setup. Then, a broad set of user tests has been completed, which allows assessing correct performance in terms of recognition rates, ease of use and response times. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
Elsevier B.V. |
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
HuPBA;MILAB; |
Approved |
no |
Call Number |
Admin @ si @ CEA2016 |
Serial |
2768 |
Permanent link to this record |
|
|
|
Author |
Debora Gil; Sergio Vera; Agnes Borras; Albert Andaluz; Miguel Angel Gonzalez Ballester |
Title |
Anatomical Medial Surfaces with Efficient Resolution of Branches Singularities |
Type |
Journal Article |
Year |
2017 |
Publication |
Medical Image Analysis |
Abbreviated Journal |
MIA |
Volume |
35 |
Issue |
|
Pages |
390-402 |
Keywords |
Medial Representations; Shape Recognition; Medial Branching Stability ; Singular Points |
Abstract |
Medial surfaces are powerful tools for shape description, but their use has been limited due to the sensitivity of existing methods to branching artifacts. Medial branching artifacts are associated with perturbations of the object boundary rather than with geometric features. Such instability is a main obstacle to a confident application in shape recognition and description. Medial branches correspond to singularities of the medial surface and, thus, they are problematic for existing morphological and energy-based algorithms. In this paper, we use algebraic geometry concepts in an energy-based approach to compute a medial surface presenting a stable branching topology. We also present an efficient GPU-CPU implementation using standard image processing tools. We show the method's computational efficiency and quality on a custom-made synthetic database. Finally, we present some results on a medical imaging application for the localization of abdominal pathologies. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
Elsevier B.V. |
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
IAM; 600.060; 600.096; 600.075; 600.145 |
Approved |
no |
Call Number |
Admin @ si @ GVB2017 |
Serial |
2775 |
Permanent link to this record |
|
|
|
Author |
Miguel Oliveira; Victor Santos; Angel Sappa; P. Dias; A. Moreira |
Title |
Incremental Scenario Representations for Autonomous Driving using Geometric Polygonal Primitives |
Type |
Journal Article |
Year |
2016 |
Publication |
Robotics and Autonomous Systems |
Abbreviated Journal |
RAS |
Volume |
83 |
Issue |
|
Pages |
312-325 |
Keywords |
Incremental scene reconstruction; Point clouds; Autonomous vehicles; Polygonal primitives |
Abstract |
When an autonomous vehicle travels through a scenario it receives a continuous stream of sensor data. This sensor data arrives in an asynchronous fashion and often contains overlapping or redundant information. Thus, it is not trivial to create and update over time a representation of the environment observed by the vehicle. This paper presents a novel methodology to compute an incremental 3D representation of a scenario from 3D range measurements. We propose to use macro-scale polygonal primitives to model the scenario. This means that the representation of the scene is given as a list of large-scale polygons that describe the geometric structure of the environment. Furthermore, we propose mechanisms designed to update the geometric polygonal primitives over time whenever fresh sensor data is collected. Results show that the approach is capable of producing accurate descriptions of the scene, and that it is computationally very efficient when compared to other reconstruction techniques. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
Elsevier B.V. |
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
ADAS; 600.086; 600.076 |
Approved |
no |
Call Number |
Admin @ si @ OSS2016a |
Serial |
2806 |
Permanent link to this record |
|
|
|
Author |
Angel Sappa; Cristhian A. Aguilera-Carrasco; Juan A. Carvajal Ayala; Miguel Oliveira; Dennis Romero; Boris X. Vintimilla; Ricardo Toledo |
Title |
Monocular visual odometry: A cross-spectral image fusion based approach |
Type |
Journal Article |
Year |
2016 |
Publication |
Robotics and Autonomous Systems |
Abbreviated Journal |
RAS |
Volume |
85 |
Issue |
|
Pages |
26-36 |
Keywords |
Monocular visual odometry; LWIR-RGB cross-spectral imaging; Image fusion |
Abstract |
This manuscript evaluates the usage of fused cross-spectral images in a monocular visual odometry approach. Fused images are obtained through a Discrete Wavelet Transform (DWT) scheme, where the best setup is empirically obtained by means of a mutual-information-based evaluation metric. The objective is to have a flexible scheme where fusion parameters are adapted according to the characteristics of the given images. Visual odometry is computed from the fused monocular images using an off-the-shelf approach. Experimental results using data sets obtained with two different platforms are presented. Additionally, comparisons with a previous approach as well as with the monocular visible/infrared spectra are also provided, showing the advantages of the proposed scheme. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
Elsevier B.V. |
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
ADAS;600.086; 600.076 |
Approved |
no |
Call Number |
Admin @ si @ SAC2016 |
Serial |
2811 |
Permanent link to this record |
|
|
|
Author |
Enric Marti; Jordi Regincos; Jaime Lopez-Krahe; Juan J. Villanueva |
Title |
Hand line drawing interpretation as three-dimensional objects |
Type |
Journal Article |
Year |
1993 |
Publication |
Signal Processing – Intelligent systems for signal and image understanding |
Abbreviated Journal |
|
Volume |
32 |
Issue |
1-2 |
Pages |
91-110 |
Keywords |
Line drawing interpretation; line labelling; scene analysis; man-machine interaction; CAD input; line extraction |
Abstract |
In this paper we present a technique to interpret hand line drawings as objects in a three-dimensional space. The object domain considered is based on planar surfaces with straight edges; concretely, on an extension of the Origami world to hidden lines. The line drawing represents the object under orthographic projection and is sensed using a scanner. Our method is structured in two modules: feature extraction and feature interpretation. In the first, image processing techniques are applied under certain tolerance margins to detect lines and junctions in the hand line drawing. The feature interpretation module is founded on line labelling techniques using a labelled junction dictionary. A labelling algorithm is proposed here; it uses relaxation techniques to reduce the number of labels incompatible with the junction dictionary so that the convergence of solutions can be accelerated. We formulate some labelling hypotheses tending to eliminate elements in two sets of labelled interpretations: those which are compatible with the dictionary but do not correspond to three-dimensional objects, and those which represent objects unlikely to be specified by means of a line drawing. New entities arise in the line drawing as a result of the extension of the Origami world; these are defined to state the assumptions of our method as well as to clarify the proposed algorithms. This technique is framed within a project aimed at implementing a system to create 3D objects to improve man-machine interaction in CAD systems. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
Elsevier North-Holland, Inc. |
Place of Publication |
Amsterdam, The Netherlands, The Netherlands |
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
0165-1684 |
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
IAM;ISE; |
Approved |
no |
Call Number |
IAM @ iam @ MRL1993 |
Serial |
1611 |
Permanent link to this record |
|
|
|
Author |
Carles Fernandez; Pau Baiget; Xavier Roca; Jordi Gonzalez |
Title |
Exploiting Natural Language Generation in Scene Interpretation |
Type |
Book Chapter |
Year |
2009 |
Publication |
Human-Centric Interfaces for Ambient Intelligence |
Abbreviated Journal |
|
Volume |
4 |
Issue |
|
Pages |
71–93 |
Keywords |
|
Abstract |
|
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
Elsevier Science and Tech |
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
ISE |
Approved |
no |
Call Number |
ISE @ ise @ FBR2009 |
Serial |
1212 |
Permanent link to this record |