Records
Author | David Geronimo; Frederic Lerasle; Antonio Lopez | ||||
Title | State-driven particle filter for multi-person tracking | Type | Conference Article | ||
Year | 2012 | Publication | 11th International Conference on Advanced Concepts for Intelligent Vision Systems | Abbreviated Journal | |
Volume | 7517 | Issue | Pages | 467-478 | |
Keywords | human tracking | ||||
Abstract | Multi-person tracking can be exploited in applications such as driver assistance, surveillance, multimedia and human-robot interaction. With the help of human detectors, particle filters offer a robust method able to filter noisy detections and provide temporal coherence. However, some traditional problems such as occlusions with other targets or the scene, temporal drifting or even lost-target detection are rarely considered, which degrades system performance. Some authors propose to overcome these problems using heuristics that are not explained or formalized in the papers, for instance by defining exceptions to the model updating depending on tracks overlapping. In this paper we propose to formalize these events by the use of a state-graph, defining the current state of the track (e.g., potential, tracked, occluded or lost) and the transitions between states in an explicit way. This approach has the advantage of linking track actions such as the online updating of the underlying models, which gives flexibility to the system. It provides an explicit representation to adapt the multiple parallel trackers depending on the context, i.e., each track can make use of a specific filtering strategy, dynamic model, number of particles, etc., depending on its state. We implement this technique in a single-camera multi-person tracker and test it on public video sequences. |
Address | Brno, Czech Republic | ||||
Corporate Author | Thesis | ||||
Publisher | Springer | Place of Publication | Heidelberg | Editor | J. Blanc-Talon et al. |
Language | English | Summary Language | Original Title | ||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ACIVS | ||
Notes | ADAS | Approved | yes | ||
Call Number | GLL2012; ADAS @ adas @ gll2012a | Serial | 1990 | ||
Permanent link to this record | |||||
Author | Yainuvis Socarras; David Vazquez; Antonio Lopez; David Geronimo; Theo Gevers | ||||
Title | Improving HOG with Image Segmentation: Application to Human Detection | Type | Conference Article | ||
Year | 2012 | Publication | 11th International Conference on Advanced Concepts for Intelligent Vision Systems | Abbreviated Journal | |
Volume | 7517 | Issue | Pages | 178-189 | |
Keywords | Segmentation; Pedestrian Detection | ||||
Abstract | In this paper we improve the histogram of oriented gradients (HOG), a core descriptor of state-of-the-art object detection, by the use of higher-level information coming from image segmentation. The idea is to re-weight the descriptor while computing it, without increasing its size. The benefits of the proposal are two-fold: (i) to improve the performance of the detector by enriching the descriptor information and (ii) to take advantage of the information of image segmentation, which in fact is likely to be used in other stages of the detection system, such as candidate generation or refinement. We test our technique on the INRIA person dataset, which was originally developed to test HOG, embedding it in a human detection system. The well-known mean-shift segmentation method (from smaller to larger super-pixels) and different methods to re-weight the original descriptor (constant, region-luminance, color- or texture-dependent) have been evaluated. We achieve performance improvements of 4.47% in detection rate through the use of differences of color between contour pixel neighborhoods as the re-weighting function. |
Address | Brno, Czech Republic | ||||
Corporate Author | Thesis | ||||
Publisher | Springer Berlin Heidelberg | Place of Publication | Editor | J. Blanc-Talon et al. | |
Language | English | Summary Language | Original Title | ||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | 0302-9743 | ISBN | 978-3-642-33139-8 | Medium | |
Area | Expedition | Conference | ACIVS | ||
Notes | ADAS;ISE | Approved | no | ||
Call Number | ADAS @ adas @ SLV2012 | Serial | 1980 | ||
Permanent link to this record | |||||
Author | Dennis G. Romero; Anselmo Frizera; Angel Sappa; Boris X. Vintimilla; Teodiano F. Bastos | ||||
Title | A predictive model for human activity recognition by observing actions and context | Type | Conference Article | ||
Year | 2015 | Publication | Advanced Concepts for Intelligent Vision Systems, Proceedings of 16th International Conference, ACIVS 2015 | Abbreviated Journal | |
Volume | 9386 | Issue | Pages | 323-333 | |
Keywords | |||||
Abstract | This paper presents a novel model to estimate human activities, where a human activity is defined by a set of human actions. The proposed approach is based on the use of Recurrent Neural Networks (RNN) and Bayesian inference through the continuous monitoring of human actions and their surrounding environment. In the current work human activities are inferred considering not only visual analysis but also additional resources; external sources of information, such as context information, are incorporated to contribute to the activity estimation. The novelty of the proposed approach lies in the way the information is encoded, so that it can later be associated according to a predefined semantic structure. Hence, a pattern representing a given activity can be defined by a set of actions plus contextual information or other kinds of information that could be relevant to describe the activity. Experimental results with real data are provided, showing the validity of the proposed approach. | ||||
Address | Catania; Italy; October 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Springer International Publishing | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | 0302-9743 | ISBN | 978-3-319-25902-4 | Medium | |
Area | Expedition | Conference | ACIVS | ||
Notes | ADAS; 600.076 | Approved | no | ||
Call Number | Admin @ si @ RFS2015 | Serial | 2661 | ||
Permanent link to this record | |||||
Author | Jose Carlos Rubio; Joan Serrat; Antonio Lopez | ||||
Title | Video Co-segmentation | Type | Conference Article | ||
Year | 2012 | Publication | 11th Asian Conference on Computer Vision | Abbreviated Journal | |
Volume | 7725 | Issue | Pages | 13-24 | |
Keywords | |||||
Abstract | Segmentation of a single image is in general a highly underconstrained problem. A frequent approach to solve it is to somehow provide prior knowledge or constraints on what the objects of interest look like (in terms of their shape, size, color, location or structure). Image co-segmentation trades the need for such knowledge for something much easier to obtain, namely, additional images showing the object from other viewpoints. Now the segmentation problem is posed as one of differentiating the similar object regions in all the images from the more varying background. In this paper, for the first time, we extend this approach to video segmentation: given two or more video sequences showing the same object (or objects belonging to the same class) moving in a similar manner, we aim to outline its region in all the frames. In addition, the method works in an unsupervised manner, by learning to segment at testing time. We compare favorably with two state-of-the-art methods on video segmentation and report results on benchmark videos. | ||||
Address | Daejeon, Korea | ||||
Corporate Author | Thesis | ||||
Publisher | Springer Berlin Heidelberg | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | 0302-9743 | ISBN | 978-3-642-37443-2 | Medium | |
Area | Expedition | Conference | ACCV | ||
Notes | ADAS | Approved | no | ||
Call Number | Admin @ si @ RSL2012d | Serial | 2153 | ||
Permanent link to this record | |||||
Author | Bogdan Raducanu; Alireza Bosaghzadeh; Fadi Dornaika | ||||
Title | Facial Expression Recognition based on Multi-view Observations with Application to Social Robotics | Type | Conference Article | ||
Year | 2014 | Publication | 1st Workshop on Computer Vision for Affective Computing | Abbreviated Journal | |
Volume | Issue | Pages | 1-8 | ||
Keywords | |||||
Abstract | Human-robot interaction is a hot topic nowadays in the social robotics community. One crucial aspect is affective communication, which is encoded through facial expressions. In this paper, we propose a novel approach for facial expression recognition which exploits an efficient and adaptive graph-based label propagation (in semi-supervised mode) in a multi-observation framework. The facial features are extracted using an appearance-based, view- and texture-independent 3D face tracker. Our method has been extensively tested on the CMU dataset and has been conveniently compared with other methods for graph construction. With the proposed approach, we developed an application for an AIBO robot, in which it mirrors the recognized facial expression. |
Address | Singapore; November 2014 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ACCV | ||
Notes | OR;MV | Approved | no | ||
Call Number | Admin @ si @ RBD2014 | Serial | 2599 | ||
Permanent link to this record | |||||
Author | Marc Oliu; Ciprian Corneanu; Laszlo A. Jeni; Jeffrey F. Cohn; Takeo Kanade; Sergio Escalera | ||||
Title | Continuous Supervised Descent Method for Facial Landmark Localisation | Type | Conference Article | ||
Year | 2016 | Publication | 13th Asian Conference on Computer Vision | Abbreviated Journal | |
Volume | 10112 | Issue | Pages | 121-135 | |
Keywords | |||||
Abstract | Recent methods for facial landmark localisation perform well on close-to-frontal faces but have problems generalising to large head rotations. In order to address this issue we propose a second-order linear regression method that is both compact and robust against strong rotations. We provide a closed-form solution, making the method fast to train. We test the method's performance on two challenging datasets. The first has been intensively used by the community. The second has been specially generated from a well-known 3D face dataset. It is considerably more challenging, including a high diversity of rotations and more samples than any other existing public dataset. The proposed method is compared against state-of-the-art approaches, including RCPR, CGPRT, LBF, CFSS, and GSDM. Results on both datasets show that the proposed method offers state-of-the-art performance on near-frontal view data, improves on state-of-the-art methods on more challenging head rotation problems and keeps a compact model size. | ||||
Address | Taipei; Taiwan; November 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ACCV | ||
Notes | HuPBA;MILAB; | Approved | no | ||
Call Number | Admin @ si @ OCJ2016 | Serial | 2838 | ||
Permanent link to this record | |||||
Author | Sounak Dey; Anjan Dutta; Suman Ghosh; Ernest Valveny; Josep Llados | ||||
Title | Aligning Salient Objects to Queries: A Multi-modal and Multi-object Image Retrieval Framework | Type | Conference Article | ||
Year | 2018 | Publication | 14th Asian Conference on Computer Vision | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | In this paper we propose an approach for multi-modal image retrieval in multi-labelled images. A multi-modal deep network architecture is formulated to jointly model sketches and text as input query modalities in a common embedding space, which is then further aligned with the image feature space. Our architecture also relies on salient object detection through a supervised LSTM-based visual attention model learned from convolutional features. Both the alignment between the queries and the image and the supervision of the attention on the images are obtained by generalizing the Hungarian Algorithm using different loss functions. This permits encoding the object-based features and their alignment with the query irrespective of the availability of the co-occurrence of different objects in the training set. We validate the performance of our approach on standard single/multi-object datasets, showing state-of-the-art performance on every dataset. | ||||
Address | Perth; Australia; December 2018 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ACCV | ||
Notes | DAG; 600.097; 600.121; 600.129 | Approved | no | ||
Call Number | Admin @ si @ DDG2018a | Serial | 3151 | ||
Permanent link to this record | |||||
Author | Jorge Bernal; F. Javier Sanchez; Fernando Vilariño | ||||
Title | Integration of Valley Orientation Distribution for Polyp Region Identification in Colonoscopy | Type | Conference Article | ||
Year | 2011 | Publication | MICCAI 2011 Workshop on Computational and Clinical Applications in Abdominal Imaging | Abbreviated Journal |
Volume | 6668 | Issue | Pages | 76-83 | |
Keywords | |||||
Abstract | This work presents a region descriptor based on the integration of the information that the depth of valleys image provides. The depth of valleys image is based on the presence of intensity valleys around polyps due to the image acquisition. Our proposed method consists of defining, for each point, a series of radial sectors around it and then accumulating the maxima of the depth of valleys image only if the orientation of the intensity valley coincides with the orientation of the sector above. We apply our descriptor to a prior segmentation of the images and present promising results on polyp detection, outperforming other approaches that also integrate depth of valleys information. | ||||
Address | Toronto, Canada | ||||
Corporate Author | Thesis | ||||
Publisher | Springer Link | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Lecture Notes in Computer Science | Abbreviated Series Title | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | 800 | Expedition | Conference | ABI |
Notes | MV;SIAI | Approved | no | ||
Call Number | IAM @ iam @ BSV2011d | Serial | 1698 | ||
Permanent link to this record | |||||
Author | Sergio Vera; Debora Gil; Agnes Borras; F. Javier Sanchez; Frederic Perez; Marius G. Linguraru; Miguel Angel Gonzalez Ballester | ||||
Title | Computation and Evaluation of Medial Surfaces for Shape Representation of Abdominal Organs | Type | Book Chapter | ||
Year | 2012 | Publication | Workshop on Computational and Clinical Applications in Abdominal Imaging | Abbreviated Journal | |
Volume | 7029 | Issue | Pages | 223–230 | |
Keywords | medial manifolds, abdomen. | ||||
Abstract | Medial representations are powerful tools for describing and parameterizing the volumetric shape of anatomical structures. Existing methods show excellent results when applied to 2D objects, but their quality drops across dimensions. This paper contributes to the computation of medial manifolds in two aspects. First, we provide a standard scheme for the computation of medial manifolds that avoids degenerated medial axis segments; second, we introduce an energy-based method which performs independently of the dimension. We evaluate quantitatively the performance of our method with respect to existing approaches, by applying them to synthetic shapes of known medial geometry. Finally, we show results on shape representation of multiple abdominal organs, exploring the use of medial manifolds for the representation of multi-organ relations. |
Address | Toronto; Canada; | ||||
Corporate Author | Thesis | ||||
Publisher | Springer Link | Place of Publication | Berlin | Editor | H. Yoshida et al. |
Language | English | Summary Language | English | Original Title | |
Series Editor | Series Title | Lecture Notes in Computer Science | Abbreviated Series Title | LNCS | |
Series Volume | Series Issue | Edition | |||
ISSN | 0302-9743 | ISBN | 978-3-642-28556-1 | Medium | |
Area | Expedition | Conference | ABDI | ||
Notes | IAM;MV | Approved | no | ||
Call Number | IAM @ iam @ VGB2012 | Serial | 1834 | ||
Permanent link to this record | |||||
Author | Sergio Vera; Debora Gil; Agnes Borras; F. Javier Sanchez; Frederic Perez; Marius G. Linguraru | ||||
Title | Computation and Evaluation of Medial Surfaces for Shape Representation of Abdominal Organs | Type | Conference Article | ||
Year | 2011 | Publication | Workshop on Computational and Clinical Applications in Abdominal Imaging | Abbreviated Journal | |
Volume | 7029 | Issue | Pages | 223-230 | |
Keywords | |||||
Abstract | Medial representations are powerful tools for describing and parameterizing the volumetric shape of anatomical structures. Existing methods show excellent results when applied to 2D objects, but their quality drops across dimensions. This paper contributes to the computation of medial manifolds in two aspects. First, we provide a standard scheme for the computation of medial manifolds that avoids degenerated medial axis segments; second, we introduce an energy-based method which performs independently of the dimension. We evaluate quantitatively the performance of our method with respect to existing approaches, by applying them to synthetic shapes of known medial geometry. Finally, we show results on shape representation of multiple abdominal organs, exploring the use of medial manifolds for the representation of multi-organ relations. | ||||
Address | Nice, France | ||||
Corporate Author | Thesis | ||||
Publisher | Springer Berlin Heidelberg | Place of Publication | Editor | H. Yoshida et al. |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ABDI | ||
Notes | IAM; MV | Approved | no | ||
Call Number | VGB2011 | Serial | 2036 | ||
Permanent link to this record | |||||
Author | Xinhang Song; Luis Herranz; Shuqiang Jiang | ||||
Title | Depth CNNs for RGB-D Scene Recognition: Learning from Scratch Better than Transferring from RGB-CNNs | Type | Conference Article | ||
Year | 2017 | Publication | 31st AAAI Conference on Artificial Intelligence | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | RGB-D scene recognition; weakly supervised; fine tune; CNN | ||||
Abstract | Scene recognition with RGB images has been extensively studied and has reached remarkable recognition levels, thanks to convolutional neural networks (CNN) and large scene datasets. In contrast, current RGB-D scene data is much more limited, so methods often leverage large RGB datasets by transferring pretrained RGB CNN models and fine-tuning with the target RGB-D dataset. However, we show that this approach has the limitation of hardly reaching the bottom layers, which are key to learning modality-specific features. In contrast, we focus on the bottom layers and propose an alternative strategy to learn depth features, combining local weakly supervised training from patches followed by global fine-tuning with images. This strategy is capable of learning very discriminative depth-specific features with limited depth images, without resorting to Places-CNN. In addition, we propose a modified CNN architecture to further match the complexity of the model and the amount of data available. For RGB-D scene recognition, depth and RGB features are combined by projecting them into a common space and further learning a multilayer classifier, which is jointly optimized in an end-to-end network. Our framework achieves state-of-the-art accuracy on NYU2 and SUN RGB-D in both depth-only and combined RGB-D data. | ||||
Address | San Francisco CA; February 2017 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | AAAI | ||
Notes | LAMP; 600.120 | Approved | no | ||
Call Number | Admin @ si @ SHJ2017 | Serial | 2967 | ||
Permanent link to this record | |||||
Author | Mohamed Ali Souibgui; Sanket Biswas; Andres Mafla; Ali Furkan Biten; Alicia Fornes; Yousri Kessentini; Josep Llados; Lluis Gomez; Dimosthenis Karatzas | ||||
Title | Text-DIAE: a self-supervised degradation invariant autoencoder for text recognition and document enhancement | Type | Conference Article | ||
Year | 2023 | Publication | Proceedings of the 37th AAAI Conference on Artificial Intelligence | Abbreviated Journal | |
Volume | 37 | Issue | 2 | Pages | |
Keywords | Representation Learning for Vision; CV Applications; CV Language and Vision; ML Unsupervised; Self-Supervised Learning | ||||
Abstract | In this paper, we propose a Text-Degradation Invariant Auto Encoder (Text-DIAE), a self-supervised model designed to tackle two tasks, text recognition (handwritten or scene text) and document image enhancement. We start by employing a transformer-based architecture that incorporates three pretext tasks as learning objectives to be optimized during pre-training without the use of labelled data. Each of the pretext objectives is specifically tailored for the final downstream tasks. We conduct several ablation experiments that confirm the design choice of the selected pretext tasks. Importantly, the proposed model does not exhibit the limitations of previous state-of-the-art methods based on contrastive losses, while at the same time requiring substantially fewer data samples to converge. Finally, we demonstrate that our method surpasses the state of the art in existing supervised and self-supervised settings in handwritten and scene text recognition and document image enhancement. Our code and trained models will be made publicly available at https://github.com/dali92002/SSL-OCR | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | AAAI | ||
Notes | DAG | Approved | no | ||
Call Number | Admin @ si @ SBM2023 | Serial | 3848 | ||
Permanent link to this record | |||||
Author | Khanh Nguyen; Ali Furkan Biten; Andres Mafla; Lluis Gomez; Dimosthenis Karatzas | ||||
Title | Show, Interpret and Tell: Entity-Aware Contextualised Image Captioning in Wikipedia | Type | Conference Article | ||
Year | 2023 | Publication | Proceedings of the 37th AAAI Conference on Artificial Intelligence | Abbreviated Journal | |
Volume | 37 | Issue | 2 | Pages | 1940-1948 |
Keywords | |||||
Abstract | Humans exploit prior knowledge to describe images, and are able to adapt their explanation to the specific contextual information given, even to the extent of inventing plausible explanations when contextual information and images do not match. In this work, we propose the novel task of captioning Wikipedia images by integrating contextual knowledge. Specifically, we produce models that jointly reason over Wikipedia articles, Wikimedia images and their associated descriptions to produce contextualized captions. The same Wikimedia image can be used to illustrate different articles, and the produced caption needs to be adapted to the specific context, allowing us to explore the limits of the model to adjust captions to different contextual information. Dealing with out-of-dictionary words and Named Entities is a challenging task in this domain. To address this, we propose a pre-training objective, Masked Named Entity Modeling (MNEM), and show that this pretext task results in significantly improved models. Furthermore, we verify that a model pre-trained on Wikipedia generalizes well to News Captioning datasets. We further define two different test splits according to the difficulty of the captioning task. We offer insights on the role and the importance of each modality and highlight the limitations of our model. | ||||
Address | Washington; USA; February 2023 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | AAAI | ||
Notes | DAG | Approved | no | ||
Call Number | Admin @ si @ NBM2023 | Serial | 3860 | ||
Permanent link to this record | |||||
Author | Marcos V Conde; Javier Vazquez; Michael S Brown; Radu Timofte | ||||
Title | NILUT: Conditional Neural Implicit 3D Lookup Tables for Image Enhancement | Type | Conference Article | ||
Year | 2024 | Publication | 38th AAAI Conference on Artificial Intelligence | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | 3D lookup tables (3D LUTs) are a key component for image enhancement. Modern image signal processors (ISPs) have dedicated support for these as part of the camera rendering pipeline. Cameras typically provide multiple options for picture styles, where each style is usually obtained by applying a unique handcrafted 3D LUT. Current approaches for learning and applying 3D LUTs are notably fast, yet not so memory-efficient, as storing multiple 3D LUTs is required. For this reason and other implementation limitations, their use on mobile devices is less popular. In this work, we propose a Neural Implicit LUT (NILUT), an implicitly defined continuous 3D color transformation parameterized by a neural network. We show that NILUTs are capable of accurately emulating real 3D LUTs. Moreover, a NILUT can be extended to incorporate multiple styles into a single network with the ability to blend styles implicitly. Our novel approach is memory-efficient, controllable and can complement previous methods, including learned ISPs. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | AAAI | ||
Notes | CIC; MACO | Approved | no | ||
Call Number | Admin @ si @ CVB2024 | Serial | 3872 | ||
Permanent link to this record | |||||
Author | Pau Baiget; Joan Soto; Xavier Roca; Jordi Gonzalez | ||||
Title | Automatic Generation of Computer-Animated Sequences based on Human Behaviour Modelling | Type | Conference Article | ||
Year | 2007 | Publication | 10th International Conference on Computer Graphics and Artificial Intelligence | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | |||||
Address | Athens (Greece) | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | 3IA | ||
Notes | ISE | Approved | no | ||
Call Number | ISE @ ise @ BSR2007 | Serial | 808 | ||
Permanent link to this record |