Author |
Fadi Dornaika; Abdelmalik Moujahid; Bogdan Raducanu |
|
|
Title |
Facial expression recognition using tracked facial actions: Classifier performance analysis |
Type |
Journal Article |
|
Year |
2013 |
Publication |
Engineering Applications of Artificial Intelligence |
Abbreviated Journal |
EAAI |
|
|
Volume |
26 |
Issue |
1 |
Pages |
467-477 |
|
|
Keywords |
Visual face tracking; 3D deformable models; Facial actions; Dynamic facial expression recognition; Human–computer interaction |
|
|
Abstract |
In this paper, we address the analysis and recognition of facial expressions in continuous videos. More precisely, we study the performance of classifiers that exploit head-pose-independent temporal facial action parameters. These are provided by an appearance-based 3D face tracker that simultaneously estimates the 3D head pose and facial actions. The use of such a tracker makes the recognition pose- and texture-independent. Two different schemes are studied. The first scheme adopts a dynamic time warping technique for recognizing expressions, where training data are given by temporal signatures associated with different universal facial expressions. The second scheme models the temporal signatures associated with facial actions as fixed-length feature vectors (observations) and uses machine learning algorithms to recognize the displayed expression. Experiments, carried out on CMU video sequences and home-made video sequences, quantified the performance of the different schemes. The results show that the use of dimension reduction techniques on the extracted time series can improve the classification performance. Moreover, these experiments show that the best recognition rate can be above 90%. |
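The first scheme described in the abstract matches a query signature against per-expression templates with dynamic time warping. A minimal sketch of that idea; the one-dimensional signatures and the nearest-neighbour rule are assumptions of this example, not the paper's pipeline:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D time series
    (O(len(a) * len(b)) dynamic programming). Real facial-action
    signatures are multi-dimensional; the per-step cost would then
    be a vector norm."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(query, templates):
    """Label the query with the expression whose template is closest."""
    return min(templates, key=lambda label: dtw_distance(query, templates[label]))

# Toy temporal signatures: a rising "smile" action vs. a flat "neutral" one.
templates = {"smile": [0.0, 0.2, 0.6, 1.0], "neutral": [0.0, 0.0, 0.1, 0.0]}
print(classify([0.0, 0.3, 0.7, 0.9], templates))  # prints "smile"
```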
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Elsevier |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
OR; 600.046; MV |
Approved |
no |
|
|
Call Number |
Admin @ si @ DMR2013 |
Serial |
2185 |
|
Permanent link to this record |
|
|
|
|
Author |
David Roche; Debora Gil; Jesus Giraldo |
|
|
Title |
Multiple active receptor conformation, agonist efficacy and maximum effect of the system: the conformation-based operational model of agonism |
Type |
Journal Article |
|
Year |
2013 |
Publication |
Drug Discovery Today |
Abbreviated Journal |
DDT |
|
|
Volume |
18 |
Issue |
7-8 |
Pages |
365-371 |
|
|
Keywords |
|
|
|
Abstract |
The operational model of agonism assumes that the maximum effect a particular receptor system can achieve (the Em parameter) is fixed. Em estimates are above but close to the asymptotic maximum effects of endogenous agonists. The concept of Em is contradicted by superagonists and those positive allosteric modulators that significantly increase the maximum effect of endogenous agonists. An extension of the operational model is proposed that assumes that the Em parameter does not necessarily have a single value for a receptor system but has multiple values associated with multiple active receptor conformations. The model provides a mechanistic link between active receptor conformation and agonist efficacy, which can be useful for the analysis of agonist response under different receptor scenarios. |
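For reference, the classical operational model that the abstract extends (Black and Leff) gives the effect as E = Em·τⁿ·[A]ⁿ / ((K_A + [A])ⁿ + τⁿ·[A]ⁿ). The sketch below evaluates it with conformation-specific Em values in the spirit of the proposed extension; all parameter values are made up for illustration:

```python
def operational_effect(A, Em, KA, tau, n=1.0):
    """Black-Leff operational model of agonism:
    E = Em * tau**n * A**n / ((KA + A)**n + tau**n * A**n)."""
    return Em * (tau * A) ** n / ((KA + A) ** n + (tau * A) ** n)

# Two hypothetical active conformations of one receptor, each carrying its
# own maximum effect Em (the paper's central proposal); a higher-Em
# conformation can account for superagonism.
conformations = [
    {"Em": 100.0, "KA": 1e-6, "tau": 10.0},
    {"Em": 130.0, "KA": 1e-6, "tau": 30.0},
]
A = 1e-4  # near-saturating agonist concentration (mol/L)
effects = [operational_effect(A, **c) for c in conformations]
```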
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Elsevier |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
IAM; 600.057; 600.054 |
Approved |
no |
|
|
Call Number |
IAM @ iam @ RGG2013a |
Serial |
2190 |
|
Permanent link to this record |
|
|
|
|
Author |
Ferran Poveda; Debora Gil; Enric Marti; Albert Andaluz; Manel Ballester; Francesc Carreras Costa |
|
|
Title |
Helical structure of the cardiac ventricular anatomy assessed by Diffusion Tensor Magnetic Resonance Imaging multi-resolution tractography |
Type |
Journal Article |
|
Year |
2013 |
Publication |
Revista Española de Cardiología |
Abbreviated Journal |
REC |
|
|
Volume |
66 |
Issue |
10 |
Pages |
782-790 |
|
|
Keywords |
Heart; Diffusion magnetic resonance imaging; Diffusion tractography; Helical heart; Myocardial ventricular band |
|
|
Abstract |
A deep understanding of the myocardial structure linking the morphology and function of the heart would unravel crucial knowledge for medical and surgical clinical procedures and studies. Several conceptual models of myocardial fiber organization have been proposed, but the lack of an automatic and objective methodology has prevented an agreement. We sought to deepen this knowledge through advanced computer graphic representations of the myocardial fiber architecture by diffusion tensor magnetic resonance imaging (DT-MRI).
We performed automatic tractography reconstruction of unsegmented DT-MRI canine heart datasets coming from the public database of the Johns Hopkins University. Full-scale tractographies were built with 200 seeds and are composed of streamlines computed on the vector field of primary eigenvectors given by the diffusion tensor volumes. We also introduced a novel multi-scale visualization technique in order to obtain a simplified tractography. This methodology allowed us to keep the main geometric features of the fiber tracts, making it easier to decipher the main properties of the architectural organization of the heart.
In the analysis of the output of our tractographic representations we found exact correlation with low-level details of myocardial architecture, but also with the more abstract conceptualization of a continuous helical ventricular myocardial fiber array.
Objective analysis of myocardial architecture by an automated method, including the entire myocardium and using several 3D levels of complexity, reveals a continuous helical myocardial fiber arrangement of both right and left ventricles, supporting the anatomical model of the helical ventricular myocardial band described by Torrent-Guasp. |
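The tractography step described above integrates streamlines along the primary-eigenvector field of the diffusion tensors. A deliberately simplified sketch (Euler integration, nearest-voxel lookup, a synthetic field), not the reconstruction code used in the study:

```python
import numpy as np

def streamline(vector_field, seed, step=0.5, n_steps=100):
    """Trace one fibre streamline through an (X, Y, Z, 3) field of primary
    eigenvectors by Euler integration with nearest-voxel lookup."""
    bounds = np.array(vector_field.shape[:3]) - 1
    pts = [np.asarray(seed, dtype=float)]
    for _ in range(n_steps):
        voxel = tuple(np.clip(np.round(pts[-1]).astype(int), 0, bounds))
        v = vector_field[voxel]
        norm = np.linalg.norm(v)
        if norm < 1e-8:  # stop where no principal direction exists
            break
        pts.append(pts[-1] + step * v / norm)
    return np.array(pts)

# Synthetic volume whose fibres all run along the x axis.
field = np.zeros((8, 8, 8, 3))
field[..., 0] = 1.0
track = streamline(field, seed=(0, 4, 4), n_steps=10)
```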
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Elsevier |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
IAM; 600.044; 600.060 |
Approved |
no |
|
|
Call Number |
IAM @ iam @ PGM2013 |
Serial |
2194 |
|
Permanent link to this record |
|
|
|
|
Author |
Egils Avots; M. Daneshmand; Andres Traumann; Sergio Escalera; G. Anbarjafari |
|
|
Title |
Automatic garment retexturing based on infrared information |
Type |
Journal Article |
|
Year |
2016 |
Publication |
Computers & Graphics |
Abbreviated Journal |
CG |
|
|
Volume |
59 |
Issue |
|
Pages |
28-38 |
|
|
Keywords |
Garment Retexturing; Texture Mapping; Infrared Images; RGB-D Acquisition Devices; Shading |
|
|
Abstract |
This paper introduces a new automatic technique for garment retexturing using a single static image along with the depth and infrared information obtained using the Microsoft Kinect II as the RGB-D acquisition device. First, the garment is segmented out from the image using either the Breadth-First Search algorithm or the semi-automatic procedure provided by the GrabCut method. Then texture domain coordinates are computed for each pixel belonging to the garment using normalised 3D information. Afterwards, shading is applied to the new colours from the texture image. As the main contribution of the proposed method, the latter information is obtained by extracting a linear map transforming the colour present in the infrared image to that of the RGB colour channels. One of the most important impacts of this strategy is that the resulting retexturing algorithm is colour-, pattern- and lighting-invariant. The experimental results show that it can be used to produce realistic representations, which is substantiated by implementing it under various experimentation scenarios involving varying lighting intensities and directions. Successful results are also achieved on video sequences, as well as on images of subjects taking different poses. Based on the Mean Opinion Score analysis conducted with many randomly chosen users, it has been shown to produce more realistic-looking results than the existing state-of-the-art methods suggested in the literature. From a wide perspective, the proposed method can be used for retexturing all sorts of segmented surfaces, although the focus of this study is on garment retexturing, and the investigation of the configurations is steered accordingly, since the experiments target an application in the context of virtual fitting rooms. |
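The shading step above relies on a linear map from the infrared image to the RGB channels. One plausible way to estimate such a map is per-channel least squares over co-registered pixel samples; the data and gain/offset values below are synthetic, and this is only a sketch of the idea, not the paper's method:

```python
import numpy as np

def fit_ir_to_rgb_map(ir, rgb):
    """Least-squares affine map from IR intensity to RGB.
    ir: (N,) samples; rgb: (N, 3) colours.
    Returns (3, 2): per-channel (gain, offset)."""
    X = np.column_stack([ir, np.ones_like(ir)])       # intensity + bias
    coeffs, *_ = np.linalg.lstsq(X, rgb, rcond=None)  # (2, 3)
    return coeffs.T

def apply_map(M, ir):
    """Map IR intensities through the fitted affine coefficients."""
    return np.column_stack([ir, np.ones_like(ir)]) @ M.T

# Synthetic co-registered samples: each channel is a gain/offset of IR.
ir = np.linspace(0.1, 1.0, 50)
rgb = np.column_stack([0.8 * ir + 0.1, 0.6 * ir + 0.2, 0.4 * ir + 0.05])
M = fit_ir_to_rgb_map(ir, rgb)
```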
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Elsevier |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HuPBA; MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ ADT2016 |
Serial |
2759 |
|
Permanent link to this record |
|
|
|
|
Author |
Simone Balocco; Maria Zuluaga; Guillaume Zahnd; Su-Lin Lee; Stefanie Demirci |
|
|
Title |
Computing and Visualization for Intravascular Imaging and Computer Assisted Stenting |
Type |
Book Whole |
|
Year |
2016 |
Publication |
Computing and Visualization for Intravascular Imaging and Computer-Assisted Stenting |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Elsevier |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
9780128110188 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ BZZ2016 |
Serial |
2821 |
|
Permanent link to this record |
|
|
|
|
Author |
Pau Riba; Lutz Goldmann; Oriol Ramos Terrades; Diede Rusticus; Alicia Fornes; Josep Llados |
|
|
Title |
Table detection in business document images by message passing networks |
Type |
Journal Article |
|
Year |
2022 |
Publication |
Pattern Recognition |
Abbreviated Journal |
PR |
|
|
Volume |
127 |
Issue |
|
Pages |
108641 |
|
|
Keywords |
|
|
|
Abstract |
Tabular structures in business documents offer a complementary dimension to the raw textual data. For instance, they encode relationships among pieces of information. Nowadays, digital mailroom applications have become a key service for workflow automation. Therefore, the detection and interpretation of tables is crucial. With the recent advances in information extraction, table detection and recognition have gained interest in document image analysis, in particular in the absence of rule lines and with unknown information about rows and columns. However, business documents usually contain sensitive contents, limiting the number of public benchmarking datasets. In this paper, we propose a graph-based approach for detecting tables in document images which does not require the raw content of the document. Hence, the sensitive content can be removed beforehand and, instead of using the raw image or textual content, we propose a purely structural approach to keep sensitive data anonymous. Our framework uses graph neural networks (GNNs) to describe the local repetitive structures that constitute a table. In particular, our main application domain is business documents. We have carefully validated our approach on two invoice datasets and a modern document benchmark. Our experiments demonstrate that tables can be detected by purely structural approaches. |
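At the core of such GNN frameworks is a message-passing round in which every node (e.g. a text box described only by its geometry) aggregates its neighbours' features. A minimal numpy sketch; the graph, features and weight shapes are illustrative, not the paper's architecture:

```python
import numpy as np

def message_passing_step(features, adjacency, W_self, W_neigh):
    """One message-passing round:
    h_i' = relu(W_self h_i + W_neigh * mean_{j in N(i)} h_j)."""
    deg = adjacency.sum(axis=1, keepdims=True).clip(min=1.0)
    neigh_mean = (adjacency @ features) / deg
    return np.maximum(features @ W_self.T + neigh_mean @ W_neigh.T, 0.0)

# Purely structural node features: box positions only, no text content.
feats = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [5.0, 3.0]])
# Boxes 0-1-2 are neighbours along a row (table-like); box 3 is isolated.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 0],
                [0, 0, 0, 0]], dtype=float)
rng = np.random.default_rng(0)
h = message_passing_step(feats, adj, rng.normal(size=(4, 2)), rng.normal(size=(4, 2)))
```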
|
|
Address |
July 2022 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Elsevier |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
DAG; 600.162; 600.121 |
Approved |
no |
|
|
Call Number |
Admin @ si @ RGR2022 |
Serial |
3729 |
|
Permanent link to this record |
|
|
|
|
Author |
Patricia Suarez; Angel Sappa; Boris X. Vintimilla |
|
|
Title |
Deep learning-based vegetation index estimation |
Type |
Book Chapter |
|
Year |
2021 |
Publication |
Generative Adversarial Networks for Image-to-Image Translation |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
205-234 |
|
|
Keywords |
|
|
|
Abstract |
Chapter 9 |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Elsevier |
Place of Publication |
|
Editor |
A. Solanki; A. Nayyar; M. Naved |
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MSIAU; 600.122 |
Approved |
no |
|
|
Call Number |
Admin @ si @ SSV2021a |
Serial |
3578 |
|
Permanent link to this record |
|
|
|
|
Author |
Juan Borrego-Carazo; Carles Sanchez; David Castells; Jordi Carrabina; Debora Gil |
|
|
Title |
BronchoPose: an analysis of data and model configuration for vision-based bronchoscopy pose estimation |
Type |
Journal Article |
|
Year |
2023 |
Publication |
Computer Methods and Programs in Biomedicine |
Abbreviated Journal |
CMPB |
|
|
Volume |
228 |
Issue |
|
Pages |
107241 |
|
|
Keywords |
Videobronchoscopy guiding; Deep learning; Architecture optimization; Datasets; Standardized evaluation framework; Pose estimation |
|
|
Abstract |
Vision-based bronchoscopy (VB) models require the registration of the virtual lung model with the frames from the video bronchoscopy to provide effective guidance during the biopsy. The registration can be achieved by either tracking the position and orientation of the bronchoscopy camera or by calibrating its deviation from the pose (position and orientation) simulated in the virtual lung model. Recent advances in neural networks and temporal image processing have provided new opportunities for guided bronchoscopy. However, such progress has been hindered by the lack of comparative experimental conditions.
In the present paper, we share a novel synthetic dataset allowing for a fair comparison of methods. Moreover, this paper investigates several neural network architectures for the learning of temporal information at different levels of subject personalization. In order to improve orientation measurement, we also present a standardized comparison framework and a novel metric for camera orientation learning. Results on the dataset show that the proposed metric and architectures, as well as the standardized conditions, provide notable improvements to current state-of-the-art camera pose estimation in video bronchoscopy. |
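Orientation error for camera pose estimation is commonly scored with a geodesic angle between predicted and ground-truth rotations. The quaternion-based sketch below illustrates that generic measure only; the paper proposes its own novel metric, which is not reproduced here:

```python
import numpy as np

def orientation_error_deg(q_pred, q_true):
    """Geodesic angle (degrees) between two unit quaternions; abs() folds
    the double cover (q and -q encode the same rotation)."""
    q_pred = np.asarray(q_pred, float) / np.linalg.norm(q_pred)
    q_true = np.asarray(q_true, float) / np.linalg.norm(q_true)
    dot = np.clip(abs(np.dot(q_pred, q_true)), -1.0, 1.0)
    return np.degrees(2.0 * np.arccos(dot))

# A 90-degree roll about the camera axis vs. the identity orientation.
q_identity = [1.0, 0.0, 0.0, 0.0]
q_roll90 = [np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)]
```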
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Elsevier |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
IAM; |
Approved |
no |
|
|
Call Number |
Admin @ si @ BSC2023 |
Serial |
3702 |
|
Permanent link to this record |
|
|
|
|
Author |
Mohamed Ali Souibgui; Alicia Fornes; Yousri Kessentini; Beata Megyesi |
|
|
Title |
Few shots are all you need: A progressive learning approach for low resource handwritten text recognition |
Type |
Journal Article |
|
Year |
2022 |
Publication |
Pattern Recognition Letters |
Abbreviated Journal |
PRL |
|
|
Volume |
160 |
Issue |
|
Pages |
43-49 |
|
|
Keywords |
|
|
|
Abstract |
Handwritten text recognition in low-resource scenarios, such as manuscripts with rare alphabets, is a challenging problem. In this paper, we propose a few-shot learning-based handwriting recognition approach that significantly reduces the human annotation process by requiring only a few images of each alphabet symbol. The method consists of detecting all the symbols of a given alphabet in a textline image and decoding the obtained similarity scores into the final sequence of transcribed symbols. Our model is first pretrained on synthetic line images generated from an alphabet, which could differ from the alphabet of the target domain. A second training step is then applied to reduce the gap between the source and the target data. Since this retraining would require annotation of thousands of handwritten symbols together with their bounding boxes, we propose to avoid such human effort through an unsupervised progressive learning approach that automatically assigns pseudo-labels to the unlabeled data. The evaluation on different datasets shows that our model can lead to competitive results with a significant reduction in human effort. The code will be publicly available in the following repository: https://github.com/dali92002/HTRbyMatching |
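The unsupervised progressive step described above amounts to keeping only confident predictions as pseudo-labels for the next training round. A schematic sketch; the scorer, threshold and data are hypothetical stand-ins for the trained symbol detector:

```python
def progressive_pseudo_labels(unlabeled_lines, score_fn, threshold=0.9):
    """Return (line, label) pairs the current model is confident about;
    these join the training set for the next round, so the threshold can
    be relaxed progressively as the model improves."""
    accepted = []
    for line in unlabeled_lines:
        label, confidence = score_fn(line)
        if confidence >= threshold:
            accepted.append((line, label))
    return accepted

# Stand-in scorer: pretend the model is only confident on short lines.
fake_scorer = lambda line: ("transcription", 0.95 if len(line) < 10 else 0.5)
kept = progressive_pseudo_labels(["short", "a much longer text line"], fake_scorer)
```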
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Elsevier |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
DAG; 600.121; 600.162; 602.230 |
Approved |
no |
|
|
Call Number |
Admin @ si @ SFK2022 |
Serial |
3736 |
|
Permanent link to this record |
|
|
|
|
Author |
Yecong Wan; Yuanshuo Cheng; Mingwen Shao; Jordi Gonzalez |
|
|
Title |
Image rain removal and illumination enhancement done in one go |
Type |
Journal Article |
|
Year |
2022 |
Publication |
Knowledge-Based Systems |
Abbreviated Journal |
KBS |
|
|
Volume |
252 |
Issue |
|
Pages |
109244 |
|
|
Keywords |
|
|
|
Abstract |
Rain removal plays an important role in the restoration of degraded images. Recently, CNN-based methods have achieved remarkable success. However, these approaches neglect that the appearance of real-world rain is often accompanied by low-light conditions, which further degrade image quality and thereby hinder the restoration task. It is therefore indispensable to jointly remove rain and enhance illumination for real-world rain image restoration. To this end, we propose a novel spatially-adaptive network, dubbed SANet, which can remove rain and enhance illumination in one go under the guidance of a degradation mask. Meanwhile, to fully utilize negative samples, a contrastive loss is proposed to preserve more natural textures and consistent illumination. In addition, we present a new synthetic dataset, named DarkRain, to boost the development of rain image restoration algorithms in practical scenarios. DarkRain not only contains different degrees of rain, but also considers different lighting conditions, more realistically simulating real-world rainfall scenarios. SANet is extensively evaluated on the proposed dataset and attains new state-of-the-art performance against other combined methods. Moreover, after a simple transformation, our SANet surpasses the existing state-of-the-art algorithms in both rain removal and low-light image enhancement. |
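A contrastive loss of the kind the abstract mentions pulls the restored image toward the clean target (positive) while pushing it away from the rainy, dark input (negative). The sketch below uses plain L1 pixel distances where a full implementation would use deep-feature distances; it is an illustration, not SANet's loss:

```python
import numpy as np

def contrastive_restoration_loss(restored, clean, degraded, eps=1e-8):
    """Ratio of the distance to the positive (clean) sample over the
    distance to the negative (degraded) sample: lower is better."""
    d_pos = np.abs(restored - clean).mean()
    d_neg = np.abs(restored - degraded).mean()
    return d_pos / (d_neg + eps)

clean = np.zeros((4, 4))
degraded = np.ones((4, 4))  # stand-in for the rainy, low-light input
```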
|
|
Address |
Sept 2022 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Elsevier |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ISE; 600.157; 600.168 |
Approved |
no |
|
|
Call Number |
Admin @ si @ WCS2022 |
Serial |
3744 |
|
Permanent link to this record |
|
|
|
|
Author |
Aymen Azaza |
|
|
Title |
Context, Motion and Semantic Information for Computational Saliency |
Type |
Book Whole |
|
Year |
2018 |
Publication |
PhD Thesis, Universitat Autonoma de Barcelona-CVC |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
The main objective of this thesis is to highlight the salient object in an image or in a video sequence. We address three important, but in our opinion insufficiently investigated, aspects of saliency detection. Firstly, we extend previous research on saliency by explicitly modelling the information provided by the context, and we show the importance of explicit context modelling for saliency estimation. Several important works in saliency are based on the usage of object proposals. However, these methods focus on the saliency of the object proposal itself and ignore the context. To introduce context into such saliency approaches, we couple every object proposal with its direct context. This allows us to evaluate the importance of the immediate surround (context) for its saliency. We propose several saliency features which are computed from the context proposals, including features based on omni-directional and horizontal context continuity. Secondly, we investigate the usage of top-down methods (high-level semantic information) for the task of saliency prediction, since most computational methods are bottom-up or only include a few semantic classes. We propose to consider a wider group of object classes. These objects represent important semantic information which we exploit in our saliency prediction approach. Thirdly, we develop a method to detect video saliency by computing saliency from supervoxels and optical flow. In addition, we apply the context features developed in this thesis to video saliency detection. The method combines shape and motion features with our proposed context features. To summarize, we prove that extending object proposals with their direct context improves the task of saliency detection in both image and video data. The importance of semantic information in saliency estimation is also evaluated. Finally, we propose a new motion feature to detect saliency in video data. The three proposed novelties are evaluated on standard saliency benchmark datasets and are shown to improve with respect to the state-of-the-art. |
|
|
Address |
October 2018 |
|
|
Corporate Author |
|
Thesis |
Ph.D. thesis |
|
|
Publisher |
Ediciones Graficas Rey |
Place of Publication |
|
Editor |
Joost Van de Weijer; Ali Douik |
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
978-84-945373-9-4 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
LAMP; 600.120 |
Approved |
no |
|
|
Call Number |
Admin @ si @ Aza2018 |
Serial |
3218 |
|
Permanent link to this record |
|
|
|
|
Author |
Daniel Ponsa |
|
|
Title |
Model-Based Visual Localisation of Contours and Vehicles |
Type |
Book Whole |
|
Year |
2007 |
Publication |
PhD Thesis, Universitat Autonoma de Barcelona-CVC |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
PhD Thesis |
|
|
Abstract |
|
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
Ph.D. thesis |
|
|
Publisher |
Ediciones Graficas Rey |
Place of Publication |
|
Editor |
Antonio Lopez; Xavier Roca |
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
978-84-935251-3-2 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
ADAS @ adas @ Pon2007 |
Serial |
1107 |
|
Permanent link to this record |
|
|
|
|
Author |
Robert Benavente |
|
|
Title |
A Parametric Model for Computational Colour Naming |
Type |
Book Whole |
|
Year |
2007 |
Publication |
PhD Thesis, Universitat Autonoma de Barcelona-CVC |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
PhD Thesis |
|
|
Abstract |
|
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
Ph.D. thesis |
|
|
Publisher |
Ediciones Graficas Rey |
Place of Publication |
|
Editor |
Maria Vanrell |
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
CIC |
Approved |
no |
|
|
Call Number |
CAT @ cat @ Ben2007 |
Serial |
1108 |
|
Permanent link to this record |
|
|
|
|
Author |
Pau Baiget |
|
|
Title |
Modeling Human Behavior for Image Sequence Understanding and Generation |
Type |
Book Whole |
|
Year |
2009 |
Publication |
PhD Thesis, Universitat Autonoma de Barcelona-CVC |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
The comprehension of animal behavior, especially human behavior, is one of the most ancient and studied problems since the beginning of civilization. The many factors that interact to determine a person's actions require the collaboration of different disciplines, such as psychology, biology, or sociology. In recent years the analysis of human behavior has also received great attention from the computer vision community, given the latest advances in the acquisition of human motion data from image sequences.
Despite the increasing availability of such data, there still exists a gap towards obtaining a conceptual representation of the obtained observations. Human behavior analysis is based on a qualitative interpretation of the results, and therefore the assignment of concepts to quantitative data is subject to a certain ambiguity.
This Thesis tackles the problem of obtaining a proper representation of human behavior in the contexts of computer vision and animation. On the one hand, a good behavior model should permit the recognition and explanation of the observed activity in image sequences. On the other hand, such a model must allow the generation of new synthetic instances, which model the behavior of virtual agents.
First, we propose methods to automatically learn the models from observations. Given a set of quantitative results output by a vision system, a normal behavior model is learnt. This result provides a tool to determine the normality or abnormality of future observations. However, machine learning methods are unable to provide a richer description of the observations. We confront this problem by means of a new method that incorporates prior knowledge about the environment and about the expected behaviors. This framework, formed by the reasoning engine FMTL and the modeling tool SGT, allows the generation of conceptual descriptions of activity in new image sequences. Finally, we demonstrate the suitability of the proposed framework to simulate the behavior of virtual agents, which are introduced into real image sequences and interact with observed real agents, thereby easing the generation of augmented reality sequences.
The set of approaches presented in this Thesis has a growing set of potential applications. The analysis and description of behavior in image sequences has its principal application in the domain of smart video surveillance, in order to detect suspicious or dangerous behaviors. Other applications include automatic sport commentaries, elderly monitoring, road traffic analysis, and the development of semantic video search engines. Alternatively, behavioral virtual agents allow the simulation of accurate real situations, such as fires or crowds. Moreover, the inclusion of virtual agents into real image sequences has been widely deployed in the games and cinema industries. |
|
|
Address |
Bellaterra (Spain) |
|
|
Corporate Author |
|
Thesis |
Ph.D. thesis |
|
|
Publisher |
Ediciones Graficas Rey |
Place of Publication |
|
Editor |
Jordi Gonzalez; Xavier Roca |
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
|
Approved |
no |
|
|
Call Number |
Admin @ si @ Bai2009 |
Serial |
1210 |
|
Permanent link to this record |
|
|
|
|
Author |
Dena Bazazian |
|
|
Title |
Fully Convolutional Networks for Text Understanding in Scene Images |
Type |
Book Whole |
|
Year |
2018 |
Publication |
PhD Thesis, Universitat Autonoma de Barcelona-CVC |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
Text understanding in scene images has gained plenty of attention in the computer vision community and is an important task in many applications, as text carries semantically rich information about scene content and context. For instance, reading text in a scene can be applied to autonomous driving, scene understanding or assisting visually impaired people. The general aim of scene text understanding is to localize and recognize text in scene images. Text regions are first localized in the original image by a trained detector model and afterwards fed into a recognition module. The tasks of localization and recognition are highly correlated, since an inaccurate localization can affect the recognition task.
The main purpose of this thesis is to devise efficient methods for scene text understanding. We investigate how the latest results on deep learning can advance text understanding pipelines. Recently, Fully Convolutional Networks (FCNs) and derived methods have achieved significant performance on semantic segmentation and pixel-level classification tasks. Therefore, we took advantage of the strengths of FCN approaches in order to detect text in natural scenes. In this thesis we have focused on two challenging tasks of scene text understanding: Text Detection and Word Spotting. For the task of text detection, we have proposed an efficient text proposal technique in scene images. We have considered the Text Proposals method as the baseline, which is an approach to reduce the search space of possible text regions in an image. In order to improve the Text Proposals method, we combined it with Fully Convolutional Networks to efficiently reduce the number of proposals while maintaining the same level of accuracy, thus gaining a significant speed-up. Our experiments demonstrate that this text proposal approach yields significantly higher recall rates than line-based text localization techniques, while also producing better-quality localization. We have also applied this technique to compressed images such as videos from wearable egocentric cameras. For the task of word spotting, we have introduced a novel mid-level word representation method. We have proposed a technique to create and exploit an intermediate representation of images based on text attributes which roughly correspond to character probability maps. Our representation extends the concept of the Pyramidal Histogram Of Characters (PHOC) by exploiting Fully Convolutional Networks to derive a pixel-wise mapping of the character distribution within candidate word regions. We call this representation the Soft-PHOC.
Furthermore, we show how to use Soft-PHOC descriptors for word spotting tasks through an efficient text line proposal algorithm. To evaluate the detected text, we propose a novel line-based evaluation along with the classic bounding-box-based approach. We test our method on incidental scene text images, which comprise real-life scenarios such as urban scenes. The importance of incidental scene text images is due to the complexity of backgrounds, perspective, variety of script and language, short text and little linguistic context. All of these factors together make incidental scene text images challenging. |
|
|
Address |
November 2018 |
|
|
Corporate Author |
|
Thesis |
Ph.D. thesis |
|
|
Publisher |
Ediciones Graficas Rey |
Place of Publication |
|
Editor |
Dimosthenis Karatzas; Andrew Bagdanov |
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
978-84-948531-1-1 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
DAG; 600.121 |
Approved |
no |
|
|
Call Number |
Admin @ si @ Baz2018 |
Serial |
3220 |
|
Permanent link to this record |