Records | |||||
---|---|---|---|---|---|
Author | Ilke Demir; Dena Bazazian; Adriana Romero; Viktoriia Sharmanska; Lyne P. Tchapmi | ||||
Title | WiCV 2018: The Fourth Women In Computer Vision Workshop | Type | Conference Article | ||
Year | 2018 | Publication | 4th Women in Computer Vision Workshop | Abbreviated Journal | |
Volume | Issue | Pages | 1941-19412 | ||
Keywords | Conferences; Computer vision; Industries; Object recognition; Engineering profession; Collaboration; Machine learning | ||||
Abstract | We present WiCV 2018 – Women in Computer Vision Workshop, organized in conjunction with CVPR 2018, to increase the visibility and inclusion of women researchers in the computer vision field. Computer vision and machine learning have made incredible progress over the past years, yet the number of female researchers is still low both in academia and in industry. WiCV is organized to raise the visibility of female researchers, to increase collaboration, and to provide mentorship and opportunities to female-identifying junior researchers in the field. In its fourth year, we are proud to present the changes and improvements over the past years, a summary of statistics for presenters and attendees, followed by expectations for future generations. | ||||
Address | Salt Lake City; USA; June 2018 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | WiCV | ||
Notes | DAG; 600.121; 600.129 | Approved | no | ||
Call Number | Admin @ si @ DBR2018 | Serial | 3222 | ||
Permanent link to this record | |||||
Author | Arnau Baro; Pau Riba; Alicia Fornes | ||||
Title | A Starting Point for Handwritten Music Recognition | Type | Conference Article | ||
Year | 2018 | Publication | 1st International Workshop on Reading Music Systems | Abbreviated Journal | |
Volume | Issue | Pages | 5-6 | ||
Keywords | Optical Music Recognition; Long Short-Term Memory; Convolutional Neural Networks; MUSCIMA++; CVCMUSCIMA | ||||
Abstract | In recent years, interest in Optical Music Recognition (OMR) has reawakened, especially since the appearance of deep learning. However, there are very few works addressing handwritten scores. In this work we describe a full OMR pipeline for handwritten music scores, using Convolutional and Recurrent Neural Networks, that could serve as a baseline for the research community. | ||||
Address | Paris; France; September 2018 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | WORMS | ||
Notes | DAG; 600.097; 601.302; 601.330; 600.121 | Approved | no | ||
Call Number | Admin @ si @ BRF2018 | Serial | 3223 | ||
Permanent link to this record | |||||
Author | Laura Lopez-Fuentes; Alessandro Farasin; Harald Skinnemoen; Paolo Garza | ||||
Title | Deep Learning models for passability detection of flooded roads | Type | Conference Article | ||
Year | 2018 | Publication | MediaEval 2018 Multimedia Benchmark Workshop | Abbreviated Journal | |
Volume | 2283 | Issue | Pages | ||
Keywords | |||||
Abstract | In this paper we study and compare several approaches to detect floods and evidence for the passability of roads by conventional means on Twitter. We focus on tweets containing both visual information (a picture shared by the user) and metadata, a combination of text and related extra information intrinsic to the Twitter API. This work has been done in the context of the MediaEval 2018 Multimedia Satellite Task. | ||||
Address | Sophia Antipolis; France; October 2018 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | MediaEval | ||
Notes | LAMP; 600.084; 600.109; 600.120 | Approved | no | ||
Call Number | Admin @ si @ LFS2018 | Serial | 3224 | ||
Permanent link to this record | |||||
Author | Anjan Dutta; Hichem Sahbi | ||||
Title | Stochastic Graphlet Embedding | Type | Journal Article | ||
Year | 2018 | Publication | IEEE Transactions on Neural Networks and Learning Systems | Abbreviated Journal | TNNLS |
Volume | Issue | Pages | 1-14 | ||
Keywords | Stochastic graphlets; Graph embedding; Graph classification; Graph hashing; Betweenness centrality | ||||
Abstract | Graph-based methods are known to be successful in many machine learning and pattern classification tasks. These methods consider semi-structured data as graphs, where nodes correspond to primitives (parts, interest points, segments, etc.) and edges characterize the relationships between these primitives. However, these non-vectorial graph data cannot be straightforwardly plugged into off-the-shelf machine learning algorithms without a preliminary step of – explicit or implicit – graph vectorization and embedding. This embedding process should be resilient to intra-class graph variations while being highly discriminant. In this paper, we propose a novel high-order stochastic graphlet embedding (SGE) that maps graphs into vector spaces. Our main contribution is a new stochastic search procedure that efficiently parses a given graph and extracts/samples graphlets of arbitrarily high order. We consider these graphlets, with increasing orders, to model local primitives as well as their increasingly complex interactions. To build our graph representation, we measure the distribution of these graphlets in a given graph, using particular hash functions that efficiently assign sampled graphlets to isomorphic sets with a very low probability of collision. When combined with maximum margin classifiers, these graphlet-based representations have a positive impact on the performance of pattern comparison and recognition, as corroborated through extensive experiments on standard benchmark databases. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | DAG; 602.167; 602.168; 600.097; 600.121 | Approved | no | ||
Call Number | Admin @ si @ DuS2018 | Serial | 3225 | ||
Permanent link to this record | |||||
Author | Abel Gonzalez-Garcia; Davide Modolo; Vittorio Ferrari | ||||
Title | Objects as context for detecting their semantic parts | Type | Conference Article | ||
Year | 2018 | Publication | 31st IEEE Conference on Computer Vision and Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 6907 - 6916 | ||
Keywords | Proposals; Semantics; Wheels; Automobiles; Context modeling; Task analysis; Object detection | ||||
Abstract | We present a semantic part detection approach that effectively leverages object information. We use the object appearance and its class as indicators of what parts to expect. We also model the expected relative location of parts inside the objects based on their appearance. We achieve this with a new network module, called OffsetNet, that efficiently predicts a variable number of part locations within a given object. Our model incorporates all these cues to detect parts in the context of their objects. This leads to considerably higher performance for the challenging task of part detection compared to using part appearance alone (+5 mAP on the PASCAL-Part dataset). We also compare to other part detection methods on both PASCAL-Part and CUB200-2011 datasets. | ||||
Address | Salt Lake City; USA; June 2018 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPR | ||
Notes | LAMP; 600.109; 600.120 | Approved | no | ||
Call Number | Admin @ si @ GMF2018 | Serial | 3229 | ||
Permanent link to this record | |||||
Author | Antonio Lopez | ||||
Title | Pedestrian Detection Systems | Type | Book Chapter | ||
Year | 2018 | Publication | Wiley Encyclopedia of Electrical and Electronics Engineering | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Pedestrian detection is a highly relevant topic for both advanced driver assistance systems (ADAS) and autonomous driving. In this entry, we review the ideas behind pedestrian detection systems from the point of view of perception based on computer vision and machine learning. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ADAS; 600.118 | Approved | no | ||
Call Number | Admin @ si @ Lop2018 | Serial | 3230 | ||
Permanent link to this record | |||||
Author | Gemma Rotger; Felipe Lumbreras; Francesc Moreno-Noguer; Antonio Agudo | ||||
Title | 2D-to-3D Facial Expression Transfer | Type | Conference Article | ||
Year | 2018 | Publication | 24th International Conference on Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 2008 - 2013 | ||
Keywords | |||||
Abstract | Automatically changing the expression and physical features of a face from an input image is a topic that has traditionally been tackled in a 2D domain. In this paper, we bring this problem to 3D and propose a framework that, given an input RGB video of a human face under a neutral expression, initially computes his/her 3D shape and then performs a transfer to a new and potentially non-observed expression. For this purpose, we parameterize the rest shape – obtained from standard factorization approaches over the input video – using a triangular mesh which is further clustered into larger macro-segments. The expression transfer problem is then posed as a direct mapping between this shape and a source shape, such as the blend shapes of an off-the-shelf 3D dataset of human facial expressions. The mapping is resolved to be geometrically consistent between 3D models by requiring points in specific regions to map on semantically equivalent regions. We validate the approach on several synthetic and real examples of input faces that largely differ from the source shapes, yielding very realistic expression transfers even in cases with topology changes, such as a synthetic video sequence of a single-eyed cyclops. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICPR | ||
Notes | MSIAU; 600.086; 600.130; 600.118 | Approved | no | ||
Call Number | Admin @ si @ RLM2018 | Serial | 3232 | ||
Permanent link to this record | |||||
Author | Md. Mostafa Kamal Sarker; Mohammed Jabreel; Hatem A. Rashwan; Syeda Furruka Banu; Antonio Moreno; Petia Radeva; Domenec Puig | ||||
Title | CuisineNet: Food Attributes Classification using Multi-scale Convolution Network. | Type | Miscellaneous | ||
Year | 2018 | Publication | Arxiv | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | The diversity of food and its attributes represents the culinary habits of peoples from different countries. This paper therefore addresses the problem of identifying the food culture of people around the world, and its flavor, by classifying two main food attributes: cuisine and flavor. A deep learning model based on multi-scale convolutional networks is proposed for extracting more accurate features from input images. The aggregation of multi-scale convolution layers with different kernel sizes is also used for weighting the features resulting from different scales. In addition, a joint loss function based on Negative Log Likelihood (NLL) is used to fit the model probability to multi-labeled classes for the multi-modal classification task. Furthermore, this work provides a new dataset for food attributes, called Yummly48K, extracted from the popular food website Yummly. Our model is assessed on the constructed Yummly48K dataset. The experimental results show that our proposed method yields 65% and 62% average F1 scores on the validation and test sets, outperforming the state-of-the-art models. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB; no proj | Approved | no | ||
Call Number | Admin @ si @ KJR2018 | Serial | 3235 | ||
Permanent link to this record | |||||
Author | Eduardo Aguilar; Beatriz Remeseiro; Marc Bolaños; Petia Radeva | ||||
Title | Grab, Pay, and Eat: Semantic Food Detection for Smart Restaurants | Type | Journal Article | ||
Year | 2018 | Publication | IEEE Transactions on Multimedia | Abbreviated Journal | |
Volume | 20 | Issue | 12 | Pages | 3266 - 3275 |
Keywords | |||||
Abstract | The increase in people's awareness of their nutritional habits has drawn considerable attention to the field of automatic food analysis. Focusing on self-service restaurant environments, automatic food analysis is not only useful for extracting nutritional information from the foods selected by customers; it is also of high interest for speeding up service by solving the bottleneck produced at the cashiers in times of high demand. In this paper, we address the problem of automatic food tray analysis in canteen and restaurant environments, which consists of predicting multiple foods placed on a tray image. We propose a new approach for food analysis based on convolutional neural networks, which we name Semantic Food Detection, that integrates food localization, recognition and segmentation in the same framework. We demonstrate that our method improves state-of-the-art food detection by a considerable margin on the public UNIMIB2016 dataset, achieving about 90% in terms of F-measure, and thus provides a significant technological advance towards automatic billing in restaurant environments. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB; no proj | Approved | no | ||
Call Number | Admin @ si @ ARB2018 | Serial | 3236 | ||
Permanent link to this record | |||||
Author | Simone Balocco; Mauricio Gonzalez; Ricardo Ñancule; Petia Radeva; Gabriel Thomas | ||||
Title | Calcified Plaque Detection in IVUS Sequences: Preliminary Results Using Convolutional Nets | Type | Conference Article | ||
Year | 2018 | Publication | International Workshop on Artificial Intelligence and Pattern Recognition | Abbreviated Journal | |
Volume | 11047 | Issue | Pages | 34-42 | |
Keywords | Intravascular ultrasound images; Convolutional nets; Deep learning; Medical image analysis | ||||
Abstract | The manual inspection of intravascular ultrasound (IVUS) images to detect clinically relevant patterns is a difficult and laborious task performed routinely by physicians. In this paper, we present a framework based on convolutional nets for the quick selection of IVUS frames containing arterial calcification, a pattern whose detection plays a vital role in the diagnosis of atherosclerosis. Preliminary experiments on a dataset acquired from eighty patients show that convolutional architectures improve the detections of a shallow classifier in terms of F1-measure, precision and recall. | ||||
Address | Cuba; September 2018 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | IWAIPR | ||
Notes | MILAB; not mentioned | Approved | no | |
Call Number | Admin @ si @ BGÑ2018 | Serial | 3237 | ||
Permanent link to this record | |||||
Author | Marçal Rusiñol; Lluis Gomez | ||||
Title | Avances en clasificación de imágenes en los últimos diez años. Perspectivas y limitaciones en el ámbito de archivos fotográficos históricos | Type | Journal | ||
Year | 2018 | Publication | Revista anual de la Asociación de Archiveros de Castilla y León | Abbreviated Journal | |
Volume | 21 | Issue | Pages | 161-174 | |
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | DAG; 600.121; 600.129 | Approved | no | ||
Call Number | Admin @ si @ RuG2018 | Serial | 3239 | ||
Permanent link to this record | |||||
Author | Ozan Caglayan; Adrien Bardet; Fethi Bougares; Loic Barrault; Kai Wang; Marc Masana; Luis Herranz; Joost Van de Weijer | ||||
Title | LIUM-CVC Submissions for WMT18 Multimodal Translation Task | Type | Conference Article | ||
Year | 2018 | Publication | 3rd Conference on Machine Translation | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | This paper describes the multimodal Neural Machine Translation systems developed by LIUM and CVC for the WMT18 Shared Task on Multimodal Translation. This year we propose several modifications to our previous multimodal attention architecture in order to better integrate convolutional features and refine them using encoder-side information. Our final constrained submissions ranked first for English→French and second for English→German language pairs among the constrained submissions, according to the automatic evaluation metric METEOR. | ||||
Address | Brussels; Belgium; October 2018 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | WMT | ||
Notes | LAMP; 600.106; 600.120 | Approved | no | ||
Call Number | Admin @ si @ CBB2018 | Serial | 3240 | ||
Permanent link to this record | |||||
Author | Aymen Azaza; Joost Van de Weijer; Ali Douik; Marc Masana | ||||
Title | Context Proposals for Saliency Detection | Type | Journal Article | ||
Year | 2018 | Publication | Computer Vision and Image Understanding | Abbreviated Journal | CVIU |
Volume | 174 | Issue | Pages | 1-11 | |
Keywords | |||||
Abstract | One of the fundamental properties of a salient object region is its contrast with the immediate context. The problem is that numerous object regions exist which potentially can all be salient. One way to prevent an exhaustive search over all object regions is by using object proposal algorithms. These return a limited set of regions which are most likely to contain an object. Several saliency estimation methods have used object proposals. However, they focus on the saliency of the proposal only, and the importance of its immediate context has not been evaluated. In this paper, we aim to improve salient object detection. Therefore, we extend object proposal methods with context proposals, which allow us to incorporate the immediate context in the saliency computation. We propose several saliency features which are computed from the context proposals. In the experiments, we evaluate five object proposal methods for the task of saliency segmentation, and find that Multiscale Combinatorial Grouping outperforms the others. Furthermore, experiments show that the proposed context features improve performance, and that our method matches results on the FT dataset and obtains competitive results on three other datasets (PASCAL-S, MSRA-B and ECSSD). | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | LAMP; 600.109; 600.120 | Approved | no | |
Call Number | Admin @ si @ AWD2018 | Serial | 3241 | ||
Permanent link to this record | |||||
Author | Lu Yu; Yongmei Cheng; Joost Van de Weijer | ||||
Title | Weakly Supervised Domain-Specific Color Naming Based on Attention | Type | Conference Article | ||
Year | 2018 | Publication | 24th International Conference on Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 3019 - 3024 | ||
Keywords | |||||
Abstract | The majority of existing color naming methods focuses on the eleven basic color terms of the English language. However, in many applications, different sets of color names are used for the accurate description of objects. Labeling data to learn these domain-specific color names is an expensive and laborious task. Therefore, in this article we aim to learn color names from weakly labeled data. For this purpose, we add an attention branch to the color naming network. The attention branch is used to modulate the pixel-wise color naming predictions of the network. In experiments, we illustrate that the attention branch correctly identifies the relevant regions. Furthermore, we show that our method obtains state-of-the-art results for pixel-wise and image-wise classification on the EBAY dataset and is able to learn color names for various domains. | ||||
Address | Beijing; August 2018 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICPR | ||
Notes | LAMP; 600.109; 602.200; 600.120 | Approved | no | ||
Call Number | Admin @ si @ YCW2018 | Serial | 3243 | ||
Permanent link to this record | |||||
Author | Ana Maria Ares; Jorge Bernal; Maria Jesus Nozal; F. Javier Sanchez; Jose Bernal | ||||
Title | Results of the use of Kahoot! gamification tool in a course of Chemistry | Type | Conference Article | ||
Year | 2018 | Publication | 4th International Conference on Higher Education Advances | Abbreviated Journal | |
Volume | Issue | Pages | 1215-1222 | ||
Keywords | |||||
Abstract | The present study examines the use of Kahoot! as a gamification tool to explore mixed learning strategies. We analyze its use in two different groups of a theoretical subject of the third course of the Degree in Chemistry. An empirical-analytical methodology was followed, using Kahoot! in two different groups of students with different frequencies. The academic results of these two groups of students were compared with each other and with those obtained in the previous course, in which Kahoot! was not employed, with the aim of measuring the evolution in the students' knowledge. The results showed, in all cases, that the use of Kahoot! led to a significant increase in the overall marks and in the number of students who passed the subject. Moreover, some differences were also observed in students' academic performance according to the group. Finally, it can be concluded that the use of a gamification tool (Kahoot!) in a university classroom generally improved students' learning and marks, and that this improvement is more prevalent in those students who achieved a better Kahoot! performance. | ||||
Address | Valencia; June 2018 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | HEAD | ||
Notes | MV; no proj | Approved | no | ||
Call Number | Admin @ si @ ABN2018 | Serial | 3246 | ||
Permanent link to this record |