Records
Author | Eric Amiel | ||||
Title | Visualisation de vaisseaux sanguins | Type | Report | ||
Year | 2005 | Publication | Rapport de Stage | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Université Paul Sabatier Toulouse III | Thesis | Bachelor's thesis | ||
Publisher | Université Paul Sabatier Toulouse III | Place of Publication | Toulouse | Editor | Enric Marti |
Language | French | Summary Language | French | Original Title | |
Series Editor | IUP Systèmes Intelligents | Series Title | Abbreviated Series Title | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | IAM | Approved | no | ||
Call Number | IAM @ iam @ Ami2005 | Serial | 1690 | ||
Permanent link to this record | |||||
Author | Ishaan Gulrajani; Kundan Kumar; Faruk Ahmed; Adrien Ali Taiga; Francesco Visin; David Vazquez; Aaron Courville | ||||
Title | PixelVAE: A Latent Variable Model for Natural Images | Type | Conference Article | ||
Year | 2017 | Publication | 5th International Conference on Learning Representations | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | Deep Learning; Unsupervised Learning | ||||
Abstract | Natural image modeling is a landmark challenge of unsupervised learning. Variational Autoencoders (VAEs) learn a useful latent representation and generate samples that preserve global structure but tend to suffer from image blurriness. PixelCNNs model sharp contours and details very well, but lack an explicit latent representation and have difficulty modeling large-scale structure in a computationally efficient way. In this paper, we present PixelVAE, a VAE model with an autoregressive decoder based on PixelCNN. The resulting architecture achieves state-of-the-art log-likelihood on binarized MNIST. We extend PixelVAE to a hierarchy of multiple latent variables at different scales; this hierarchical model achieves competitive likelihood on 64x64 ImageNet and generates high-quality samples on LSUN bedrooms. | ||||
Address | Toulon; France; April 2017 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICLR | ||
Notes | ADAS; 600.085; 600.076; 601.281; 600.118 | Approved | no | ||
Call Number | ADAS @ adas @ GKA2017 | Serial | 2815 | ||
Permanent link to this record | |||||
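The autoregressive PixelCNN decoder that PixelVAE builds on conditions each pixel only on the pixels above it and to its left, which is enforced through masked convolutions. A minimal numpy sketch of that mask construction (names and shapes are illustrative, not the authors' code):

```python
import numpy as np

def causal_mask(k, mask_type="A"):
    """Build a PixelCNN-style causal mask for a k x k conv kernel.

    Pixels strictly above the centre row, and those left of centre in
    the centre row, are visible. Mask type "A" also hides the centre
    pixel (used in the first layer so a pixel never sees itself);
    type "B" exposes it (used in deeper layers).
    """
    m = np.zeros((k, k))
    c = k // 2
    m[:c, :] = 1           # all rows above the centre row
    m[c, :c] = 1           # centre row, strictly left of centre
    if mask_type == "B":
        m[c, c] = 1        # deeper layers may see the centre feature
    return m
```

Multiplying a convolution kernel elementwise by this mask before applying it is what makes the decoder's conditional distribution factorize in raster-scan order.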
Author | Antoni Gurgui; Debora Gil; Enric Marti | ||||
Title | Laplacian Unitary Domain for Texture Morphing | Type | Conference Article | ||
Year | 2015 | Publication | Proceedings of the 10th International Conference on Computer Vision Theory and Applications VISIGRAPP2015 | Abbreviated Journal | |
Volume | 1 | Issue | Pages | 693-699 | |
Keywords | Facial; metamorphosis; Laplacian Morphing | ||||
Abstract | Deformation of expressive textures is the gateway to realistic computer synthesis of expressions. Thanks to their good mathematical properties and flexible formulation on irregular meshes, most texture mappings rely on solutions to the Laplacian in Cartesian space. In the context of facial expression morphing, this approximation can be seen from the opposite point of view by neglecting the metric. In this paper, we use the properties of the Laplacian in manifolds to present a novel approach to warping expressive facial images in order to generate a morphing between them. | ||||
Address | Munich; Germany; February 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | SciTePress | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-989-758-089-5 | Medium | ||
Area | Expedition | Conference | VISAPP | ||
Notes | IAM; 600.075 | Approved | no | ||
Call Number | Admin @ si @ GGM2015 | Serial | 2614 | ||
Permanent link to this record | |||||
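The Laplacian-based texture mappings the abstract refers to amount to solving the Laplace equation with fixed boundary values. A minimal numpy sketch of discrete harmonic interpolation via Jacobi iterations on a regular grid (illustrative only; the paper works in the manifold setting, and the function names are assumptions):

```python
import numpy as np

def harmonic_fill(field, interior, iters=500):
    """Relax interior values until the field is discretely harmonic.

    `field` carries the fixed boundary values; `interior` is a boolean
    mask marking the pixels to be solved for. Each Jacobi sweep replaces
    every interior pixel by the average of its 4 neighbours (the 5-point
    Laplacian stencil set to zero).
    """
    f = field.astype(float).copy()
    for _ in range(iters):
        avg = 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                      np.roll(f, 1, 1) + np.roll(f, -1, 1))
        f[interior] = avg[interior]   # boundary pixels stay fixed
    return f
```

With a linear boundary, the unique harmonic interior is the same linear ramp, which gives a quick sanity check on the relaxation.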
Author | Xavier Otazu; Olivier Penacchio; Xim Cerda-Company | ||||
Title | Brightness and colour induction through contextual influences in V1 | Type | Conference Article | ||
Year | 2015 | Publication | Scottish Vision Group 2015 SGV2015 | Abbreviated Journal | |
Volume | 12 | Issue | 9 | Pages | 1208-2012 |
Keywords | |||||
Abstract | |||||
Address | Carnoustie; Scotland; March 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | SGV | ||
Notes | NEUROBIT | Approved | no | ||
Call Number | Admin @ si @ OPC2015a | Serial | 2632 | ||
Permanent link to this record | |||||
Author | Antonio Lopez | ||||
Title | Ridge/Valley-like structures: Creases, separatrices and drainage patterns | Type | Miscellaneous | ||
Year | 1997 | Publication | Computer Vision On-line | Abbreviated Journal |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | |||||
Address | CVC | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ADAS | Approved | no | ||
Call Number | ADAS @ adas @ Lop1997 | Serial | 488 | ||
Permanent link to this record | |||||
Author | Veronica Romero; Emilio Granell; Alicia Fornes; Enrique Vidal; Joan Andreu Sanchez | ||||
Title | Information Extraction in Handwritten Marriage Licenses Books | Type | Conference Article | ||
Year | 2019 | Publication | 5th International Workshop on Historical Document Imaging and Processing | Abbreviated Journal | |
Volume | Issue | Pages | 66-71 | ||
Keywords | |||||
Abstract | Handwritten marriage licenses books are characterized by a simple structure of the text in the records, with an evolving vocabulary mainly composed of proper names that change over time. This distinct vocabulary makes automatic transcription and semantic information extraction difficult tasks. Previous works have shown that the use of category-based language models and a Grammatical Inference technique known as MGGI can improve the accuracy of these tasks. However, the application of the MGGI algorithm requires a priori knowledge to label the words of the training strings, which is not always easy to obtain. In this paper we study how to automatically obtain the information required by the MGGI algorithm using a technique based on Confusion Networks. Using the resulting language model, full handwritten text recognition and information extraction experiments have been carried out, with results supporting the proposed approach. | ||||
Address | Sydney; Australia; September 2019 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | HIP | ||
Notes | DAG; 600.140; 600.121 | Approved | no | ||
Call Number | Admin @ si @ RGF2019 | Serial | 3352 | ||
Permanent link to this record | |||||
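A category-based language model of the kind the abstract mentions pools n-gram statistics over word categories rather than individual words, which is what makes an evolving vocabulary of proper names tractable. A toy sketch of category-level bigram estimation (the category function and names are assumptions for illustration, not the paper's model):

```python
from collections import defaultdict

def train_category_bigrams(sentences, category):
    """Estimate bigram probabilities over word categories.

    Each word is first mapped to its category (e.g. NAME vs OTHER), and
    bigram counts are pooled per category, so a proper name unseen in
    training still receives a sensible transition probability.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for words in sentences:
        tags = ["<s>"] + [category(w) for w in words]
        for a, b in zip(tags, tags[1:]):
            counts[a][b] += 1
    # normalise counts into conditional probabilities P(b | a)
    return {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
            for a, nxt in counts.items()}
```

In a full system the categories would come from the automatic labelling the paper derives from Confusion Networks, rather than a hand-written rule.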
Author | Maya Dimitrova; Ch. Roumenin; Siya Lozanova; David Rotger; Petia Radeva | ||||
Title | An Interface System Based on Multimodal Principle for Cardiological Diagnosis Assistance | Type | Conference Article | ||
Year | 2007 | Publication | International Conference On Computer Systems And Technologies | Abbreviated Journal | |
Volume | IIIB.4 | Issue | Pages | 1–6 | |
Keywords | |||||
Abstract | |||||
Address | Bulgaria | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CompSysTech’07 | ||
Notes | MILAB | Approved | no | ||
Call Number | BCNPCL @ bcnpcl @ DRL2007 | Serial | 833 | ||
Permanent link to this record | |||||
Author | Marçal Rusiñol; Josep Llados | ||||
Title | Logo Spotting by a Bag-of-words Approach for Document Categorization | Type | Conference Article | ||
Year | 2009 | Publication | 10th International Conference on Document Analysis and Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 111–115 | ||
Keywords | |||||
Abstract | In this paper we present a method for document categorization which processes incoming document images such as invoices or receipts. The categorization of these document images is done in terms of the presence of a certain graphical logo detected without segmentation. The graphical logos are described by a set of local features and the categorization of the documents is performed by the use of a bag-of-words model. Spatial coherence rules are added to reinforce the correct category hypothesis, aiming also to spot the logo inside the document image. Experiments which demonstrate the effectiveness of this system on a large set of real data are presented. | ||||
Address | Barcelona; Spain | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1520-5363 | ISBN | 978-1-4244-4500-4 | Medium | |
Area | Expedition | Conference | ICDAR | ||
Notes | DAG | Approved | no | ||
Call Number | DAG @ dag @ RuL2009b | Serial | 1179 | ||
Permanent link to this record | |||||
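The bag-of-words model used for logo-based categorization quantizes each local feature to its nearest codeword and represents the document as a histogram of codeword frequencies. A minimal numpy sketch of that quantization step (function and variable names are illustrative):

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Quantize local descriptors to their nearest codeword and return
    an L1-normalised bag-of-words histogram.

    descriptors: (N, d) array of local features from one document image.
    codebook:    (K, d) array of visual words (e.g. k-means centres).
    """
    # squared Euclidean distance from every descriptor to every codeword
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

Categorization then reduces to comparing these histograms; the paper's spatial coherence rules act on top of this representation to localize the logo.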
Author | Jaume Amores; N. Sebe; Petia Radeva | ||||
Title | Fast Spatial Pattern Discovery Integrating Boosting with Constellations of Contextual Descriptors | Type | Miscellaneous | ||
Year | 2005 | Publication | IEEE Conference on Computer Vision and Pattern Recognition (CVPR’05) | Abbreviated Journal |
Volume | 2 | Issue | 2 | Pages | 769–774
Keywords | |||||
Abstract | |||||
Address | San Diego, CA (USA) | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB | Approved | no | ||
Call Number | ADAS @ adas @ ASR2005a | Serial | 541 | ||
Permanent link to this record | |||||
Author | Joost Van de Weijer; Cordelia Schmid; Jakob Verbeek; Diane Larlus | ||||
Title | Learning Color Names for Real-World Applications | Type | Journal Article | ||
Year | 2009 | Publication | IEEE Transactions on Image Processing | Abbreviated Journal | TIP |
Volume | 18 | Issue | 7 | Pages | 1512–1524 |
Keywords | |||||
Abstract | Color names are required in real-world applications such as image retrieval and image annotation. Traditionally, they are learned from a collection of labelled color chips. These color chips are labelled with color names within a well-defined experimental setup by human test subjects. However, naming colors in real-world images differs significantly from this experimental setting. In this paper, we investigate how color names learned from color chips compare to color names learned from real-world images. To avoid hand-labelling real-world images with color names, we use Google Image to collect a data set. Due to limitations of Google Image, this data set contains a substantial quantity of wrongly labelled data. We propose several variants of the PLSA model to learn color names from this noisy data. Experimental results show that color names learned from real-world images significantly outperform color names learned from labelled color chips for both image retrieval and image annotation. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1057-7149 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | CAT @ cat @ WSV2009 | Serial | 1195 | ||
Permanent link to this record | |||||
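The paper's variants build on standard PLSA, which alternates an E-step (posterior over topics per document-word pair) and an M-step (re-estimating the topic-conditional distributions). A compact numpy sketch of vanilla PLSA EM, offered as background rather than the paper's modified variants:

```python
import numpy as np

def plsa(counts, K, iters=50, seed=0):
    """EM for plain PLSA on a (docs x words) count matrix.

    Returns p_w_z (K x W), the word distribution of each topic, and
    p_z_d (D x K), the topic mixture of each document. Topic identities
    are arbitrary (label switching), as in any mixture model.
    """
    rng = np.random.default_rng(seed)
    D, W = counts.shape
    p_w_z = rng.random((K, W)); p_w_z /= p_w_z.sum(1, keepdims=True)
    p_z_d = rng.random((D, K)); p_z_d /= p_z_d.sum(1, keepdims=True)
    for _ in range(iters):
        # E-step: P(z | d, w) proportional to P(z | d) * P(w | z)
        post = p_z_d[:, :, None] * p_w_z[None, :, :]      # D x K x W
        post /= post.sum(1, keepdims=True) + 1e-12
        # M-step: re-estimate from expected counts
        nz = counts[:, None, :] * post                    # D x K x W
        p_w_z = nz.sum(0); p_w_z /= p_w_z.sum(1, keepdims=True) + 1e-12
        p_z_d = nz.sum(2); p_z_d /= p_z_d.sum(1, keepdims=True) + 1e-12
    return p_w_z, p_z_d
```

In the color-naming setting, documents would be images, words would be quantized pixel colors, and topics would correspond to color names.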
Author | Fahad Shahbaz Khan; Joost Van de Weijer; Maria Vanrell | ||||
Title | Top-Down Color Attention for Object Recognition | Type | Conference Article | ||
Year | 2009 | Publication | 12th International Conference on Computer Vision | Abbreviated Journal | |
Volume | Issue | Pages | 979 - 986 | ||
Keywords | |||||
Abstract | Generally, the bag-of-words based image representation follows a bottom-up paradigm. The subsequent stages of the process: feature detection, feature description, vocabulary construction and image representation are performed independently of the intended object classes to be detected. In such a framework, combining multiple cues such as shape and color often provides below-expected results. This paper presents a novel method for recognizing object categories when using multiple cues by separating the shape and color cue. Color is used to guide attention by means of a top-down category-specific attention map. The color attention map is then further deployed to modulate the shape features by taking more features from regions within an image that are likely to contain an object instance. This procedure leads to a category-specific image histogram representation for each category. Furthermore, we argue that the method combines the advantages of both early and late fusion. We compare our approach with existing methods that combine color and shape cues on three data sets containing varied importance of both cues, namely, Soccer (color predominance), Flower (color and shape parity), and PASCAL VOC Challenge 2007 (shape predominance). The experiments clearly demonstrate that in all three data sets our proposed framework significantly outperforms the state-of-the-art methods for combining color and shape information. | ||||
Address | Kyoto, Japan | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1550-5499 | ISBN | 978-1-4244-4420-5 | Medium | |
Area | Expedition | Conference | ICCV | ||
Notes | CIC | Approved | no | ||
Call Number | CAT @ cat @ SWV2009 | Serial | 1196 | ||
Permanent link to this record | |||||
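The core of top-down modulation is that each local shape feature's vote into the histogram is weighted by a category-specific attention value rather than counted uniformly. A minimal numpy sketch of that weighting (names are illustrative, not the paper's implementation):

```python
import numpy as np

def attended_histogram(words, attention, vocab_size):
    """Bag-of-words histogram with attention-weighted votes.

    words:     visual-word index of each local shape feature.
    attention: per-feature weight sampled from a category-specific
               attention map (here assumed precomputed).
    Features in likely object regions thus contribute more to the
    category-specific representation.
    """
    hist = np.zeros(vocab_size)
    for w, a in zip(words, attention):
        hist[w] += a
    s = hist.sum()
    return hist / s if s > 0 else hist
```

Building one such histogram per category, each with its own attention map, yields the category-specific representations the abstract describes.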
Author | Naila Murray; Sandra Skaff; Luca Marchesotti; Florent Perronnin | ||||
Title | Towards Automatic Concept Transfer | Type | Conference Article | ||
Year | 2011 | Publication | Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Non-Photorealistic Animation and Rendering | Abbreviated Journal | |
Volume | Issue | Pages | 167–176 | ||
Keywords | chromatic modeling, color concepts, color transfer, concept transfer | ||||
Abstract | This paper introduces a novel approach to automatic concept transfer; examples of concepts are “romantic”, “earthy”, and “luscious”. The approach modifies the color content of an input image given only a concept specified by a user in natural language, thereby requiring minimal user input. This approach is particularly useful for users who are aware of the message they wish to convey in the transferred image while being unsure of the color combination needed to achieve the corresponding transfer. The user may adjust the intensity level of the concept transfer to his/her liking with a single parameter. The proposed approach uses a convex clustering algorithm, with a novel pruning mechanism, to automatically set the complexity of models of chromatic content. It also uses the Earth-Mover's Distance to compute a mapping between the models of the input image and the target chromatic concept. Results show that our approach yields transferred images which effectively represent concepts, as confirmed by a user study. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | ACM Press | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-1-4503-0907-3 | Medium | ||
Area | Expedition | Conference | NPAR | ||
Notes | CIC | Approved | no | ||
Call Number | Admin @ si @ MSM2011 | Serial | 1866 | ||
Permanent link to this record | |||||
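The Earth-Mover's Distance used to map between chromatic models has a particularly simple closed form for 1-D histograms on shared bins: the L1 distance between their cumulative distributions. A numpy sketch of that special case (the paper applies EMD to richer chromatic models; this is only the 1-D illustration):

```python
import numpy as np

def emd_1d(p, q):
    """Earth Mover's Distance between two 1-D histograms on the same
    equally spaced bins: the L1 distance between their CDFs, i.e. the
    total mass-times-distance needed to turn p into q (in bin units)."""
    p = np.asarray(p, float); q = np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    return float(np.abs(np.cumsum(p - q)).sum())
```

Moving all mass two bins to the right, for instance, costs exactly 2, which matches the intuitive transport cost.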
Author | Dawid Rymarczyk; Joost van de Weijer; Bartosz Zielinski; Bartlomiej Twardowski | ||||
Title | ICICLE: Interpretable Class Incremental Continual Learning | Type | Conference Article | ||
Year | 2023 | Publication | 20th IEEE International Conference on Computer Vision | Abbreviated Journal | |
Volume | Issue | Pages | 1887-1898 | ||
Keywords | |||||
Abstract | Continual learning enables incremental learning of new tasks without forgetting those previously learned, resulting in positive knowledge transfer that can enhance performance on both new and old tasks. However, continual learning poses new challenges for interpretability, as the rationale behind model predictions may change over time, leading to interpretability concept drift. We address this problem by proposing Interpretable Class-InCremental LEarning (ICICLE), an exemplar-free approach that adopts a prototypical part-based approach. It consists of three crucial novelties: interpretability regularization that distills previously learned concepts while preserving user-friendly positive reasoning; proximity-based prototype initialization strategy dedicated to the fine-grained setting; and task-recency bias compensation devoted to prototypical parts. Our experimental results demonstrate that ICICLE reduces the interpretability concept drift and outperforms the existing exemplar-free methods of common class-incremental learning when applied to concept-based models. | ||||
Address | Paris; France; October 2023 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICCV | ||
Notes | LAMP | Approved | no | ||
Call Number | Admin @ si @ RWZ2023 | Serial | 3947 | ||
Permanent link to this record | |||||
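The "distills previously learned concepts" step follows the general pattern of knowledge distillation: penalise the updated model for drifting from the previous model's softened predictions. A generic numpy sketch of temperature-scaled distillation (this is the standard formulation, not ICICLE's exact interpretability regularizer):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    z = np.asarray(z, float) / T
    z -= z.max()                      # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(old_logits, new_logits, T=2.0):
    """KL(old || new) over temperature-softened predictions: zero when
    the new model reproduces the old model's outputs, positive when it
    drifts (the forgetting that the regularizer penalises)."""
    p = softmax(old_logits, T)
    q = softmax(new_logits, T)
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))
```

In exemplar-free continual learning this term is computed on current-task data only, since past exemplars are unavailable.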
Author | Patricia Suarez; Dario Carpio; Angel Sappa; Henry Velesaca | ||||
Title | Transformer based Image Dehazing | Type | Conference Article | ||
Year | 2022 | Publication | 16th IEEE International Conference on Signal Image Technology & Internet Based System | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | atmospheric light; brightness component; computational cost; dehazing quality; haze-free image | ||||
Abstract | This paper presents a novel approach to remove non-homogeneous haze from real images. The proposed method consists mainly of image feature extraction, haze removal, and image reconstruction. To accomplish this challenging task, we propose an architecture based on transformers, which have been recently introduced and have shown great potential in different computer vision tasks. Our model is based on SwinIR, an image restoration architecture built on a transformer; we modify the deep feature extraction module and the depth of the model, and apply a combined loss function that improves styling and adapts the model to the non-homogeneous haze present in images. The obtained results prove superior to those obtained by state-of-the-art models. | ||||
Address | Dijon; France; October 2022 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | SITIS | ||
Notes | MSIAU; no proj | Approved | no | ||
Call Number | Admin @ si @ SCS2022 | Serial | 3803 | ||
Permanent link to this record | |||||
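SwinIR-style restoration models are commonly trained with a Charbonnier (smooth L1) reconstruction term. Sketched below as one plausible component of the combined loss the abstract mentions; this is an assumption for illustration, not the authors' exact recipe:

```python
import numpy as np

def charbonnier(pred, target, eps=1e-3):
    """Charbonnier loss: a differentiable approximation of L1 that is
    smooth near zero, which makes it a robust reconstruction term for
    restoration tasks such as dehazing."""
    d = np.asarray(pred, float) - np.asarray(target, float)
    return float(np.mean(np.sqrt(d * d + eps * eps)))
```

A combined objective would add further terms (e.g. a perceptual or style loss) to this reconstruction term with scalar weights.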
Author | Jose Seabra; F. Javier Sanchez; Francesco Ciompi; Petia Radeva | ||||
Title | Ultrasonographic Plaque Characterization using a Rayleigh Mixture Model | Type | Conference Article | ||
Year | 2010 | Publication | 7th IEEE International Symposium on Biomedical Imaging: From Nano to Macro | Abbreviated Journal |
Volume | Issue | Pages | 1–4 | ||
Keywords | |||||
Abstract | A correct modelling of tissue morphology is determinant for the identification of vulnerable plaques. This paper aims at describing the plaque composition by means of a Rayleigh Mixture Model applied to ultrasonic data. The effectiveness of using a mixture of distributions is established through synthetic and real ultrasonic data samples. Furthermore, the proposed mixture model is used in a plaque classification problem in Intravascular Ultrasound (IVUS) images of coronary plaques. A classifier tested on a set of 67 in-vitro plaques yields an overall accuracy of 86% and sensitivity of 92%, 94% and 82% for fibrotic, calcified and lipidic tissues, respectively. These results strongly suggest that different plaque types can be distinguished by means of the coefficients and Rayleigh parameters of the mixture distribution. | ||||
Address | Rotterdam (Netherlands) | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1945-7928 | ISBN | 978-1-4244-4125-9 | Medium | |
Area | Expedition | Conference | ISBI | ||
Notes | MILAB | Approved | no | ||
Call Number | BCNPCL @ bcnpcl @ SSC2010 | Serial | 1366 | ||
Permanent link to this record |
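A Rayleigh Mixture Model is typically fitted by EM, with each component's scale parameter re-estimated from responsibility-weighted second moments. A compact numpy sketch of that fit (function names and initialization are illustrative, not the paper's implementation):

```python
import numpy as np

def rayleigh_pdf(x, s2):
    """Rayleigh density with scale parameter sigma^2 = s2."""
    return (x / s2) * np.exp(-x ** 2 / (2 * s2))

def rayleigh_mixture_em(x, K=2, iters=100, seed=0):
    """EM for a K-component Rayleigh mixture on 1-D amplitude data.

    Returns the mixture weights and the sigma^2 of each component;
    these are the coefficients and Rayleigh parameters the abstract
    reports as discriminative of plaque type.
    """
    rng = np.random.default_rng(seed)
    s2 = rng.uniform(0.5 * x.var(), 1.5 * x.var(), K)
    w = np.full(K, 1.0 / K)
    for _ in range(iters):
        # E-step: responsibilities of each component for each sample
        r = w * rayleigh_pdf(x[:, None], s2)        # N x K
        r /= r.sum(1, keepdims=True) + 1e-12
        # M-step: closed-form updates for weights and scales
        w = r.mean(0)
        s2 = (r * x[:, None] ** 2).sum(0) / (2 * r.sum(0) + 1e-12)
    return w, s2
```

For a single component this reduces to the Rayleigh maximum-likelihood estimate, sigma^2 = mean(x^2) / 2, which gives a direct sanity check.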