|
Carles Sanchez, Debora Gil, T. Gache, N. Koufos, Marta Diez-Ferrer, & Antoni Rosell. (2016). SENSA: a System for Endoscopic Stenosis Assessment. In 28th Conference of the International Society for Medical Innovation and Technology.
Abstract: Documenting the severity of a static or dynamic Central Airway Obstruction (CAO) is crucial to establish a proper diagnosis and treatment, predict possible treatment effects and improve patient follow-up. Subjective visual evaluation of a stenosis during video-bronchoscopy still remains the most common way to assess a CAO, despite a consensus among experts on the need to standardize all calculations [1].
The Computer Vision Center, in cooperation with the Hospital de Bellvitge, has developed a System for Endoscopic Stenosis Assessment (SENSA), which computes CAO directly by analyzing standard bronchoscopic data without the need for other imaging technologies.
|
|
|
Ernest Valveny, & Miquel Ferrer. (2008). Application of Graph Embedding to Solve Graph Matching Problems. In Colloque International Francophone sur l'Ecrit et le Document (pp. 13–18).
|
|
|
H. Chouaib, Salvatore Tabbone, Oriol Ramos Terrades, F. Cloppet, N. Vincent, & Thierry Paquet. (2008). Sélection de Caractéristiques à partir d'un algorithme génétique et d'une combinaison de classifieurs Adaboost [Feature selection based on a genetic algorithm and a combination of AdaBoost classifiers]. In Colloque International Francophone sur l'Ecrit et le Document (pp. 181–186).
|
|
|
T.O. Nguyen, Salvatore Tabbone, Oriol Ramos Terrades, & A.T. Thierry. (2008). Proposition d'un descripteur de formes et du modèle vectoriel pour la recherche de symboles [A shape descriptor and a vector model for symbol retrieval]. In Colloque International Francophone sur l'Ecrit et le Document (pp. 79–84).
|
|
|
Clement Guerin, Christophe Rigaud, Karell Bertet, Jean-Christophe Burie, Arnaud Revel, & Jean-Marc Ogier. (2014). Réduction de l'espace de recherche pour les personnages de bandes dessinées [Reducing the search space for comic-book characters]. In 19th National Congress Reconnaissance de Formes et l'Intelligence Artificielle.
Abstract: Comic books represent an important part of the cultural heritage of many countries, and their large-scale digitization makes it possible to search within the content of the images. To date, mainly page structure and textual content have been studied; little work has addressed the graphic content. We propose to build on elements that have already been studied, such as the positions of panels and speech balloons, in order to reduce the search space and localize characters based on the balloon tails. The evaluation of our contributions on the eBDtheque dataset shows a balloon-tail detection rate of 81.2%, character localization of up to 85%, and a search-space reduction of more than 50%.
Keywords: contextual search; document analysis; comics characters
|
|
|
Albert Berenguel, Oriol Ramos Terrades, Josep Llados, & Cristina Cañero. (2016). Banknote counterfeit detection through background texture printing analysis. In 12th IAPR Workshop on Document Analysis Systems.
Abstract: This paper focuses on the detection of counterfeit photocopied banknotes. The main difficulty is working in a real industrial scenario, without any constraint on the acquisition device and with a single image. The main contributions of this paper are twofold: first, the adaptation and performance evaluation of existing approaches for classifying genuine and photocopied banknotes using background texture printing analysis, which had not been applied in this context before; second, a new dataset of Euro banknote images acquired with several cameras under different luminance conditions for evaluating these methods. Experiments on the proposed algorithms show that combining SIFT features and sparse-coding dictionaries achieves quasi-perfect classification using a linear SVM on the created dataset. Approaches using dictionaries to cover all possible texture variations have proven robust and outperform state-of-the-art methods on the proposed benchmark.
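The dictionary-based texture encoding described in this abstract can be illustrated with a minimal sketch. Note this uses hard vector quantization as a simpler stand-in for the paper's sparse coding, and the dictionary atoms, descriptors, and decision rule are illustrative numbers, not the authors' pipeline:

```python
import numpy as np

def vq_encode(descriptors, dictionary):
    """Encode a bag of local descriptors as a normalized histogram of
    nearest dictionary atoms (hard-assignment analogue of sparse coding)."""
    d2 = ((descriptors[:, None, :] - dictionary[None, :, :]) ** 2).sum(axis=2)
    idx = d2.argmin(axis=1)
    hist = np.bincount(idx, minlength=len(dictionary)).astype(float)
    return hist / hist.sum()

# Two toy texture "atoms": flat patches vs. high-contrast patches.
dictionary = np.array([[0.0, 0.0], [1.0, -1.0]])

# A genuine note dominated by high-contrast print texture, a photocopy
# dominated by flat texture (purely illustrative 2-D descriptors).
genuine = np.array([[0.9, -1.1], [1.1, -0.9], [0.1, 0.0]])
copy_   = np.array([[0.0, 0.1], [0.1, -0.1], [0.9, -1.0]])

h_genuine = vq_encode(genuine, dictionary)
h_copy    = vq_encode(copy_, dictionary)
# A linear decision on the histogram (a toy stand-in for the linear SVM):
is_genuine = lambda h: h[1] > h[0]
```

In the paper, the descriptors are SIFT features and the classifier is a trained linear SVM; this sketch only shows the encode-then-linearly-classify structure.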
|
|
|
Joost Van de Weijer, & Fahad Shahbaz Khan. (2015). An Overview of Color Name Applications in Computer Vision. In Computational Color Imaging Workshop.
Abstract: In this article we provide an overview of color name applications in computer vision. Color names are linguistic labels which humans use to communicate color. Computational color naming learns a mapping from pixel values to color names. In recent years, color names have been applied to a wide variety of computer vision applications, including image classification, object recognition, texture classification, visual tracking and action recognition. Here we provide an overview of these results, which show that in general color names outperform photometric invariants as a color representation.
Keywords: color features; color names; object recognition
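The pixel-to-color-name mapping mentioned in the abstract can be sketched with a nearest-prototype rule. The prototypes and name set below are hand-picked for illustration; learned color-naming models fit a probabilistic mapping over the 11 basic color terms instead:

```python
import numpy as np

# Hypothetical RGB prototypes for a few color names (illustrative only).
PROTOTYPES = {
    "red":   np.array([255.0, 0.0, 0.0]),
    "green": np.array([0.0, 255.0, 0.0]),
    "blue":  np.array([0.0, 0.0, 255.0]),
    "white": np.array([255.0, 255.0, 255.0]),
    "black": np.array([0.0, 0.0, 0.0]),
}

def color_name(pixel):
    """Map an RGB pixel to the name of its nearest prototype color."""
    pixel = np.asarray(pixel, float)
    return min(PROTOTYPES, key=lambda n: np.sum((PROTOTYPES[n] - pixel) ** 2))

name_dark = color_name([20, 10, 15])    # near the black prototype
name_sky  = color_name([40, 60, 220])   # near the blue prototype
```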
|
|
|
Xavier Otazu, & Maria Vanrell. (2006). Several lightness induction effects with a computational multiresolution wavelet framework. 29th European Conference on Visual Perception (ECVP'06), Perception Supplement, 32, 56.
|
|
|
V. Kober, Mikhail Mozerov, J. Alvarez-Borrego, & I.A. Ovseyevich. (2006). Pattern Recognition of Fragmented Objects with Adaptive Correlation Filters.
|
|
|
Mikhail Mozerov, V. Kober, & I.A. Ovseyevich. (2006). A Stereo Matching Algorithm with Global Smoothness Criterion.
|
|
|
Eduardo Aguilar, & Petia Radeva. (2019). Class-Conditional Data Augmentation Applied to Image Classification. In 18th International Conference on Computer Analysis of Images and Patterns (Vol. 11679, pp. 182–192). LNCS.
Abstract: Image classification is widely researched in the literature, where models based on Convolutional Neural Networks (CNNs) have provided the best results. When there is not enough data, CNN models tend to overfit. To deal with this, traditional data augmentation techniques are often applied, such as affine transformations and color-balance adjustments. However, we argue that some data augmentation techniques may be more appropriate for some classes than for others. In order to select the techniques that work best for a particular class, we propose to explore the epistemic uncertainty of the samples within each class. From our experiments, we observe that when data augmentation is applied class-conditionally, we improve results in terms of accuracy and also reduce the overall epistemic uncertainty. To summarize, in this paper we propose a class-conditional data augmentation procedure that obtains better results and improves the robustness of classification in the face of model uncertainty.
Keywords: CNNs; Data augmentation; Deep learning; Epistemic uncertainty; Image classification; Food recognition
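The selection step described in this abstract, choosing an augmentation per class by its effect on epistemic uncertainty, can be sketched schematically. The toy stochastic "model", the noise level, and the candidate augmentations below are our own assumptions, not the authors' setup; the uncertainty proxy is MC-dropout-style predictive variance:

```python
import numpy as np

rng = np.random.default_rng(0)

def epistemic_uncertainty(stochastic_model, x, n_passes=20):
    """Proxy for epistemic uncertainty: variance of class probabilities
    across repeated stochastic forward passes."""
    probs = np.stack([stochastic_model(x) for _ in range(n_passes)])
    return probs.var(axis=0).mean()

def select_augmentation(stochastic_model, class_samples, augmentations):
    """Pick, per class, the augmentation whose augmented sample yields
    the lowest epistemic uncertainty under the model."""
    choice = {}
    for label, x in class_samples.items():
        scores = {name: epistemic_uncertainty(stochastic_model, aug(x))
                  for name, aug in augmentations.items()}
        choice[label] = min(scores, key=scores.get)
    return choice

# Toy stochastic model: softmax over noisy linear scores (the noise stands
# in for dropout); shrunken inputs fall toward the noise floor and become
# much less certain.
W = np.array([[2.0, -2.0], [-2.0, 2.0]])
def model(x):
    z = x @ W + rng.normal(scale=0.5, size=2)
    e = np.exp(z - z.max())
    return e / e.sum()

samples = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}
augs = {"identity": lambda x: x,
        "shrink":   lambda x: 0.1 * x}
chosen = select_augmentation(model, samples, augs)
```

Here "identity" wins for both classes because shrinking pushes the logits toward the noise, inflating predictive variance; in the paper the candidates are real image transformations evaluated on a CNN.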
|
|
|
Estefania Talavera, Nicolai Petkov, & Petia Radeva. (2019). Unsupervised Routine Discovery in Egocentric Photo-Streams. In 18th International Conference on Computer Analysis of Images and Patterns (Vol. 11678, pp. 576–588). LNCS.
Abstract: The routine of a person is defined by the occurrence of activities throughout different days, and can directly affect the person's health. In this work, we address the recognition of routine-related days. To do so, we rely on egocentric images, which are recorded by a wearable camera and allow the life of the user to be monitored from a first-person perspective. We propose an unsupervised model that identifies routine-related days, following an outlier detection approach. We test the proposed framework on a total of 72 days of photo-streams covering around 2 weeks of the lives of 5 different camera wearers. Our model achieves an average of 76% accuracy and 68% weighted F-score over all the users. Thus, we show that our framework is able to recognise routine-related days, opening the door to the understanding of people's behaviour.
Keywords: Routine discovery; Lifestyle; Egocentric vision; Behaviour analysis
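The outlier-detection idea behind routine discovery can be sketched with a simple robust z-score rule. This is a schematic stand-in for the paper's model; the per-day feature vectors and the threshold below are invented for illustration:

```python
import numpy as np

def routine_days(day_features, threshold=2.0):
    """Flag days whose feature vector lies within `threshold` robust
    z-scores (median / MAD) of the typical day as routine; the rest
    are treated as non-routine outliers."""
    X = np.asarray(day_features, float)
    med = np.median(X, axis=0)
    mad = np.median(np.abs(X - med), axis=0) + 1e-9  # avoid divide-by-zero
    z = np.abs(X - med) / mad
    return z.max(axis=1) <= threshold

# Toy per-day descriptors (e.g. hours at work, hours commuting); the
# last day breaks the pattern of the first three.
days = [[8.0, 1.0], [7.5, 1.2], [8.2, 0.9], [0.5, 6.0]]
mask = routine_days(days)
```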
|
|
|
Joan Arnedo-Moreno, D. Bañeres, Xavier Baro, S. Caballe, S. Guerrero, L. Porta, et al. (2014). Va-ID: A trust-based virtual assessment system. In 6th International Conference on Intelligent Networking and Collaborative Systems (pp. 328–335).
Abstract: Even though online education is a very important pillar of lifelong education, institutions are still reluctant to commit to a fully online educational model. In the end, they keep relying on on-site assessment systems, mainly because fully virtual alternatives lack the social recognition and credibility they deserve. Thus, the design of virtual assessment systems able to provide effective proof of student authenticity, authorship and the integrity of the activities, in a scalable and cost-efficient manner, would be very helpful. This paper presents ValID, a virtual assessment approach based on a continuous evaluation of the trust level between students and the institution. The current trust level serves as the main mechanism for dynamically deciding which kind of controls a given student should be subjected to across different courses in a degree. The main goal is to provide a fair trade-off between security, scalability and cost, while maintaining the perceived quality of the educational model.
|
|
|
Sergio Escalera, Alicia Fornes, Oriol Pujol, & Petia Radeva. (2009). Multi-class Binary Symbol Classification with Circular Blurred Shape Models. In 15th International Conference on Image Analysis and Processing (Vol. 5716, pp. 1005–1014). LNCS. Springer Berlin Heidelberg.
Abstract: Multi-class binary symbol classification requires the use of rich descriptors and robust classifiers. Shape representation is a difficult task because of several symbol distortions, such as occlusions, elastic deformations, gaps or noise. In this paper, we present the Circular Blurred Shape Model descriptor. This descriptor encodes the arrangement information of object parts in a correlogram structure. A prior blurring degree defines the level of distortion allowed to the symbol. Moreover, we learn the new feature space using a set of Adaboost classifiers, which are combined in the Error-Correcting Output Codes framework to deal with the multi-class categorization problem. The presented work has been validated over different multi-class data sets, and compared to the state-of-the-art descriptors, showing significant performance improvements.
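The Error-Correcting Output Codes decoding step mentioned in this abstract reduces multi-class categorization to a set of binary problems. Below is a minimal sketch of Hamming-distance decoding; the code matrix and simulated binary outputs are our own toy values, and the paper's binary learners are AdaBoost classifiers rather than the hand-set bits used here:

```python
import numpy as np

def ecoc_decode(code_matrix, bit_predictions):
    """Assign each sample to the class whose codeword is nearest in
    Hamming distance to the concatenated binary-classifier outputs."""
    dists = np.abs(bit_predictions[:, None, :] - code_matrix[None, :, :]).sum(axis=2)
    return dists.argmin(axis=1)

# A 3-class, 5-bit code matrix with minimum Hamming distance 3 between
# codewords, so one erroneous binary classifier can be corrected.
codes = np.array([[1, 1, 1, 0, 0],
                  [0, 0, 1, 1, 1],
                  [1, 0, 0, 0, 1]])

# Simulated binary-classifier outputs: the first row is class 0's clean
# codeword; the second is class 1's codeword with one bit flipped.
preds = np.array([[1, 1, 1, 0, 0],
                  [0, 1, 1, 1, 1]])
labels = ecoc_decode(codes, preds)  # error-corrected class assignments
```

The second sample still decodes to class 1 despite the flipped bit, which is the error-correcting property the framework exploits.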
|
|
|
Bojana Gajic, & Ramon Baldrich. (2018). Cross-domain fashion image retrieval. In CVPR 2018 Workshop on Women in Computer Vision (WiCV 2018, 4th Edition) (pp. 19500–19502).
Abstract: Cross-domain image retrieval is a challenging task that involves matching images from one domain to their counterparts from another domain. In this paper we focus on fashion image retrieval, which involves matching an image of a fashion item taken by users to images of the same item taken under controlled conditions, usually by a professional photographer. In this setting, the products seen at training time differ from those at test time, and we use a triplet loss to train the network. We stress the importance of properly training a simple architecture, as well as adapting general models to the specific task.
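The triplet loss used for training has a compact closed form. The following NumPy sketch uses squared L2 distances, a margin of 0.2, and toy 2-D embeddings, all of which are our own illustrative choices rather than values from the paper:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge-style triplet loss on squared L2 distances:
    max(0, d(a, p) - d(a, n) + margin). Pulls the positive (same
    product, other domain) closer to the anchor than any negative."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(0.0, d_pos - d_neg + margin)

# Toy embeddings: the positive is close to the anchor, the negative far.
a = np.array([[0.0, 0.0]])
p = np.array([[0.1, 0.0]])
n = np.array([[2.0, 0.0]])
loss_easy = triplet_loss(a, p, n)  # satisfied triplet, zero loss
loss_hard = triplet_loss(a, n, p)  # violated triplet, positive loss
```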
|
|