Author: Carles Sanchez; Miguel Viñas; Coen Antens; Agnes Borras; Debora Gil
Title: Back to Front Architecture for Diagnosis as a Service
Type: Conference Article
Year: 2018
Publication: 20th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing
Pages: 343-346
Abstract: Software as a Service (SaaS) is a cloud computing model in which a provider hosts applications on a server that customers use via the internet. Since SaaS does not require installing applications on customers' own computers, it allows multiple users to use highly specialized software without extra expenses for hardware acquisition or licensing. A SaaS tailored to clinical needs would not only alleviate licensing costs but also facilitate easy access to new methods for diagnosis assistance. This paper presents a SaaS client-server architecture for Diagnosis as a Service (DaaS). The server is based on Docker technology in order to allow the execution of software implemented in different languages with high portability and scalability. The client is a content management system that allows the design of websites with multimedia content and interactive, user-editable visualization of results. We explain a usage case that uses our DaaS as a crowdsourcing platform in a multicentric pilot study carried out to evaluate the clinical benefits of a software tool for the assessment of central airway obstruction.
Address: Timisoara, Romania; September 2018
Conference: SYNASC
Notes: IAM; 600.145
Approved: no
Call Number: Admin @ si @ SVA2018
Serial: 3360
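
The DaaS record above describes a server that runs diagnosis software inside Docker containers so that tools written in different languages can be exposed to a web client. Below is a minimal, hypothetical sketch of such an endpoint in Python with Flask; the image name airway-cao:latest, the /analyze route, the file names and the JSON fields are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a DaaS-style server endpoint: the client uploads a
# study, the server runs the analysis inside a Docker container and returns
# the result. Image name, paths and JSON fields are illustrative only.
import subprocess
import tempfile
from pathlib import Path

from flask import Flask, jsonify, request

app = Flask(__name__)
ANALYSIS_IMAGE = "airway-cao:latest"  # assumed name of the analysis container

@app.post("/analyze")
def analyze():
    workdir = Path(tempfile.mkdtemp())
    request.files["scan"].save(workdir / "input.nii.gz")  # study from the CMS client

    # Run the analysis tool in an isolated container; the language the tool is
    # written in does not matter to the server, only the container interface.
    proc = subprocess.run(
        ["docker", "run", "--rm", "-v", f"{workdir}:/data",
         ANALYSIS_IMAGE, "/data/input.nii.gz", "/data/result.json"],
        capture_output=True, text=True, timeout=600,
    )
    if proc.returncode != 0:
        return jsonify({"status": "error", "log": proc.stderr}), 500
    return jsonify({"status": "ok",
                    "result": (workdir / "result.json").read_text()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```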
 

 
Author: Hugo Jair Escalante; Sergio Escalera; Isabelle Guyon; Xavier Baro; Yagmur Gucluturk; Umut Guçlu; Marcel van Gerven
Title: Explainable and Interpretable Models in Computer Vision and Machine Learning
Type: Book Whole
Year: 2018
Publication: The Springer Series on Challenges in Machine Learning
Abstract: This book compiles leading research on the development of explainable and interpretable machine learning methods in the context of computer vision and machine learning.
Research progress in computer vision and pattern recognition has led to a variety of modeling techniques with almost human-like performance. Although these models have obtained astounding results, they are limited in their explainability and interpretability: What is the rationale behind the decision made? What in the model structure explains its functioning? Hence, while good performance is a critical requirement for learning machines, explainability and interpretability capabilities are needed to take learning machines to the next step and include them in decision support systems involving human supervision.
This book, written by leading international researchers, addresses key topics of explainability and interpretability, including the following:

·Evaluation and Generalization in Interpretable Machine Learning
·Explanation Methods in Deep Learning
·Learning Functional Causal Models with Generative Neural Networks
·Learning Interpretable Rules for Multi-Label Classification
·Structuring Neural Networks for More Explainable Predictions
·Generating Post Hoc Rationales of Deep Visual Classification Decisions
·Ensembling Visual Explanations
·Explainable Deep Driving by Visualizing Causal Attention
·Interdisciplinary Perspective on Algorithmic Job Candidate Search
·Multimodal Personality Trait Analysis for Explainable Modeling of Job Interview Decisions
·Inherent Explainability: Pattern Theory-based Video Event Interpretations
 
Notes: HuPBA; not mentioned
Approved: no
Call Number: Admin @ si @ EEG2018
Serial: 3399
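
One of the topics listed in the book record above is explanation methods in deep learning. As a generic illustration of that family of techniques (not of any specific chapter's algorithm), here is a minimal vanilla-gradient saliency sketch, assuming torch and torchvision are available.

```python
# Vanilla-gradient saliency map: one widely used post hoc explanation method
# for image classifiers; shown here as a generic illustration, not as any
# chapter's specific algorithm.
import torch
from torchvision import models

model = models.resnet18(weights=None).eval()             # stand-in classifier
image = torch.rand(1, 3, 224, 224, requires_grad=True)   # stand-in input image

logits = model(image)
score = logits[0, logits.argmax()]   # logit of the predicted class
score.backward()                     # gradient of that logit w.r.t. every pixel

# Saliency: maximum absolute gradient over colour channels -> H x W heat map.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)
print(saliency.shape)                # torch.Size([224, 224])
```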
 

 
Author: Guillem Cucurull; Pau Rodriguez; Vacit Oguz Yazici; Josep M. Gonfaus; Xavier Roca; Jordi Gonzalez
Title: Deep Inference of Personality Traits by Integrating Image and Word Use in Social Networks
Type: Miscellaneous
Year: 2018
Publication: arXiv
Abstract: arXiv:1802.06757
Social media, as a major platform for communication and information exchange, is a rich repository of the opinions and sentiments of 2.3 billion users about a vast spectrum of topics. However, to sense the whys of certain social users' demands and culture-driven interests, the knowledge embedded in the 1.8 billion pictures uploaded daily to public profiles has only just started to be exploited, since this process has typically been text-based. Following this trend toward visual-based social analysis, we present a novel methodology based on deep learning to build a combined image-and-text personality trait model, trained with images posted together with words found to be highly correlated with specific personality traits. The key contribution here is to explore whether OCEAN personality trait modeling can be addressed based on images, here called MindPics, appearing with certain tags with psychological insights. We found that there is a correlation between those posted images and their accompanying texts, which can be successfully modeled using deep neural networks for personality estimation. The experimental results are consistent with previous cyber-psychology results based on texts or images.
In addition, classification results on some traits show that some patterns emerge in the set of images corresponding to a specific text, in essence those representing an abstract concept. These results open new avenues of research for further refining the proposed personality model under the supervision of psychology experts.
 
Notes: ISE; 600.098; 600.119
Approved: no
Call Number: Admin @ si @ CRY2018
Serial: 3550
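
The abstract above describes a combined image-and-text deep model that predicts OCEAN personality traits from posted pictures and their accompanying tags. The following is a minimal, hypothetical sketch of such a two-branch network; the class name MindPicsNet, the backbone, the layer sizes and the bag-of-tags text branch are assumptions for illustration, not the architecture used in the paper.

```python
# Hypothetical two-branch image + text model for the five OCEAN traits, in the
# spirit of the abstract above; backbone, layer sizes and the bag-of-tags text
# branch are assumptions for illustration, not the paper's architecture.
import torch
import torch.nn as nn
from torchvision import models

class MindPicsNet(nn.Module):                      # hypothetical name
    def __init__(self, vocab_size=10000, embed_dim=128):
        super().__init__()
        cnn = models.resnet18(weights=None)
        cnn.fc = nn.Identity()                     # expose the 512-d image feature
        self.image_branch = cnn
        self.text_branch = nn.EmbeddingBag(vocab_size, embed_dim)  # mean of tag embeddings
        self.head = nn.Sequential(
            nn.Linear(512 + embed_dim, 256), nn.ReLU(),
            nn.Linear(256, 5),                     # one score per OCEAN trait
        )

    def forward(self, image, tag_ids):
        fused = torch.cat([self.image_branch(image), self.text_branch(tag_ids)], dim=1)
        return self.head(fused)

model = MindPicsNet()
scores = model(torch.rand(4, 3, 224, 224), torch.randint(0, 10000, (4, 12)))
print(scores.shape)  # torch.Size([4, 5])
```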
 

 
Author: Bojana Gajic; Ramon Baldrich
Title: Cross-domain fashion image retrieval
Type: Conference Article
Year: 2018
Publication: CVPR 2018 Workshop on Women in Computer Vision (WiCV 2018, 4th Edition)
Pages: 19500-19502
Abstract: Cross-domain image retrieval is a challenging task that implies matching images from one domain to their pairs from another domain. In this paper we focus on fashion image retrieval, which involves matching an image of a fashion item taken by a user to images of the same item taken under controlled conditions, usually by a professional photographer. When facing this problem, we have different products at train and test time, and we use a triplet loss to train the network. We stress the importance of properly training a simple architecture, as well as of adapting general models to the specific task.
 
Address: Salt Lake City, USA; 22 June 2018
Conference: CVPRW
Notes: CIC; 600.087
Approved: no
Call Number: Admin @ si @
Serial: 3709
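
The record above trains a retrieval network with a triplet loss that pulls a user photo of an item towards shop photos of the same item and pushes away photos of other items. Below is a minimal training-step sketch using PyTorch's built-in triplet margin loss; the backbone, embedding size, margin and random data are placeholder choices, not the authors' configuration.

```python
# One triplet-loss training step in the spirit of the abstract above:
# anchor = user photo, positive = shop photo of the same item, negative = shop
# photo of a different item. Backbone, margin and data are placeholders.
import torch
import torch.nn as nn
from torchvision import models

embed = models.resnet18(weights=None)
embed.fc = nn.Linear(embed.fc.in_features, 128)   # 128-d embedding head
criterion = nn.TripletMarginLoss(margin=0.2)
optimizer = torch.optim.Adam(embed.parameters(), lr=1e-4)

# Random tensors stand in for batches of anchor / positive / negative images.
anchor, positive, negative = (torch.rand(8, 3, 224, 224) for _ in range(3))
loss = criterion(embed(anchor), embed(positive), embed(negative))
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))
```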
 

 
Author: Jon Almazan; Bojana Gajic; Naila Murray; Diane Larlus
Title: Re-ID done right: towards good practices for person re-identification
Type: Miscellaneous
Year: 2018
Publication: arXiv
Abstract: Training a deep architecture using a ranking loss has become standard for the person re-identification task. Increasingly, these deep architectures include additional components that leverage part detections, attribute predictions, pose estimators and other auxiliary information, in order to more effectively localize and align discriminative image regions. In this paper we adopt a different approach and carefully design each component of a simple deep architecture and, critically, the strategy for training it effectively for person re-identification. We extensively evaluate each design choice, leading to a list of good practices for person re-identification. By following these practices, our approach outperforms the state of the art, including more complex methods with auxiliary components, by large margins on four benchmark datasets. We also provide a qualitative analysis of our trained representation which indicates that, while compact, it is able to capture information from localized and discriminative regions, in a manner akin to an implicit attention mechanism.
Address: January 2018
Approved: no
Call Number: Admin @ si @
Serial: 3711
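
The re-identification abstract above trains an embedding with a ranking loss and then retrieves matches by comparing query and gallery embeddings. Below is a minimal sketch of that retrieval step, ranking a gallery by cosine similarity to a query embedding; the embedding network here is a toy placeholder, not the paper's model.

```python
# Retrieval step common to re-identification pipelines such as the one above:
# embed query and gallery images, L2-normalise, and rank the gallery by cosine
# similarity. The embedding network here is a toy placeholder.
import torch
import torch.nn.functional as F

def rank_gallery(embed_net, query, gallery):
    """Return gallery indices sorted from most to least similar to the query."""
    with torch.no_grad():
        q = F.normalize(embed_net(query), dim=1)    # (1, D)
        g = F.normalize(embed_net(gallery), dim=1)  # (N, D)
    similarity = (g @ q.T).squeeze(1)               # cosine similarity, (N,)
    return similarity.argsort(descending=True)

# Toy usage: a random linear "network" over precomputed 512-d image features.
toy_net = torch.nn.Linear(512, 128)
order = rank_gallery(toy_net, torch.rand(1, 512), torch.rand(100, 512))
print(order[:5])  # indices of the five gallery entries closest to the query
```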