Records
Author Pierdomenico Fiadino; Victor Ponce; Juan Antonio Torrero-Gonzalez; Marc Torrent-Moreno
Title Call Detail Records for Human Mobility Studies: Taking Stock of the Situation in the “Always Connected Era” Type Conference Article
Year 2017 Publication Workshop on Big Data Analytics and Machine Learning for Data Communication Networks Abbreviated Journal
Volume Issue Pages 43-48
Keywords mobile networks; call detail records; human mobility
Abstract The exploitation of cellular network data for studying human mobility has been a popular research topic in the last decade. Indeed, mobile terminals can be considered ubiquitous sensors that allow observing human movements on a large scale without relying on non-scalable techniques, such as surveys, or on dedicated and expensive monitoring infrastructures. In particular, Call Detail Records (CDRs), collected by operators for billing purposes, have been extensively employed due to their comparatively wide availability with respect to other types of cellular data (e.g., signaling). Despite the interest aroused by this topic, the research community has generally agreed on the scarcity of the information provided by CDRs: the position of a mobile terminal is logged only when some kind of activity (calls, SMS, data connections) occurs, which translates into a picture of mobility somewhat biased by the activity degree of users. By studying two datasets collected by a nationwide operator in 2014 and 2016, we show that the situation has drastically changed in terms of data volume and quality. The spread of flat data plans and the higher penetration of “always connected” terminals have driven up the number of recorded CDRs, providing higher temporal accuracy for users’ locations.
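As a purely illustrative aside (not code from the paper), the sketch below shows the kind of temporal-accuracy measurement the abstract refers to: given a CDR table with assumed columns user_id, timestamp and cell_id (a hypothetical layout, not the operator datasets studied), it computes the median gap between consecutive records of each user; shorter gaps mean a finer-grained view of that user's mobility.

import pandas as pd

def median_inter_record_gap(cdr_csv_path):
    # Assumed CSV layout: user_id, timestamp, cell_id (hypothetical).
    cdr = pd.read_csv(cdr_csv_path, parse_dates=["timestamp"])
    cdr = cdr.sort_values(["user_id", "timestamp"])
    # Time between consecutive records of the same user, in minutes.
    gaps = cdr.groupby("user_id")["timestamp"].diff().dt.total_seconds() / 60.0
    return gaps.groupby(cdr["user_id"]).median()

if __name__ == "__main__":
    print(median_inter_record_gap("cdr_sample.csv").describe())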
Address UCLA; USA; August 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-1-4503-5054-9 Medium
Area Expedition Conference ACMW (SIGCOMM)
Notes HuPBA; not mentioned Approved no
Call Number Admin @ si @ FPT2017 Serial 2980
Permanent link to this record
 

 
Author Aniol Lidon; Marc Bolaños; Mariella Dimiccoli; Petia Radeva; Maite Garolera; Xavier Giro
Title Semantic Summarization of Egocentric Photo-Stream Events Type Conference Article
Year 2017 Publication 2nd Workshop on Lifelogging Tools and Applications Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address San Francisco; USA; October 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-1-4503-5503-2 Medium
Area Expedition Conference ACMW (LTA)
Notes MILAB; no proj Approved no
Call Number Admin @ si @ LBD2017 Serial 3024
Permanent link to this record
 

 
Author Silvio Giancola; Anthony Cioppa; Adrien Deliege; Floriane Magera; Vladimir Somers; Le Kang; Xin Zhou; Olivier Barnich; Christophe De Vleeschouwer; Alexandre Alahi; Bernard Ghanem; Marc Van Droogenbroeck; Abdulrahman Darwish; Adrien Maglo; Albert Clapes; Andreas Luyts; Andrei Boiarov; Artur Xarles; Astrid Orcesi; Avijit Shah; Baoyu Fan; Bharath Comandur; Chen Chen; Chen Zhang; Chen Zhao; Chengzhi Lin; Cheuk-Yiu Chan; Chun Chuen Hui; Dengjie Li; Fan Yang; Fan Liang; Fang Da; Feng Yan; Fufu Yu; Guanshuo Wang; H. Anthony Chan; He Zhu; Hongwei Kan; Jiaming Chu; Jianming Hu; Jianyang Gu; Jin Chen; Joao V. B. Soares; Jonas Theiner; Jorge De Corte; Jose Henrique Brito; Jun Zhang; Junjie Li; Junwei Liang; Leqi Shen; Lin Ma; Lingchi Chen; Miguel Santos Marques; Mike Azatov; Nikita Kasatkin; Ning Wang; Qiong Jia; Quoc Cuong Pham; Ralph Ewerth; Ran Song; Rengang Li; Rikke Gade; Ruben Debien; Runze Zhang; Sangrok Lee; Sergio Escalera; Shan Jiang; Shigeyuki Odashima; Shimin Chen; Shoichi Masui; Shouhong Ding; Sin-wai Chan; Siyu Chen; Tallal El-Shabrawy; Tao He; Thomas B. Moeslund; Wan-Chi Siu; Wei Zhang; Wei Li; Xiangwei Wang; Xiao Tan; Xiaochuan Li; Xiaolin Wei; Xiaoqing Ye; Xing Liu; Xinying Wang; Yandong Guo; Yaqian Zhao; Yi Yu; Yingying Li; Yue He; Yujie Zhong; Zhenhua Guo; Zhiheng Li
Title SoccerNet 2022 Challenges Results Type Conference Article
Year 2022 Publication 5th International ACM Workshop on Multimedia Content Analysis in Sports Abbreviated Journal
Volume Issue Pages 75-86
Keywords
Abstract The SoccerNet 2022 challenges were the second annual video understanding challenges organized by the SoccerNet team. In 2022, the challenges were composed of 6 vision-based tasks: (1) action spotting, focusing on retrieving action timestamps in long untrimmed videos, (2) replay grounding, focusing on retrieving the live moment of an action shown in a replay, (3) pitch localization, focusing on detecting line and goal part elements, (4) camera calibration, dedicated to retrieving the intrinsic and extrinsic camera parameters, (5) player re-identification, focusing on retrieving the same players across multiple views, and (6) multiple object tracking, focusing on tracking players and the ball through unedited video streams. Compared to last year's challenges, tasks (1-2) had their evaluation metrics redefined to consider tighter temporal accuracies, and tasks (3-6) were novel, including their underlying data and annotations. More information on the tasks, challenges and leaderboards is available on this https URL. Baselines and development kits are available on this https URL.
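As an illustration of what a tolerance-based spotting metric involves (a simplified sketch, not the official SoccerNet evaluation toolkit), the function below matches predicted action timestamps to ground-truth ones of the same class within +/- delta seconds; tightening delta is what makes such a metric "tight".

def match_spots(predictions, ground_truth, delta=1.0):
    # predictions, ground_truth: lists of (timestamp_in_seconds, class_label).
    # A prediction is a true positive if it lies within +/- delta seconds of an
    # unmatched ground-truth instance of the same class.
    used = [False] * len(ground_truth)
    tp = 0
    for t_pred, c_pred in sorted(predictions):
        best_i, best_d = None, None
        for i, (t_gt, c_gt) in enumerate(ground_truth):
            if used[i] or c_gt != c_pred:
                continue
            d = abs(t_pred - t_gt)
            if d <= delta and (best_d is None or d < best_d):
                best_i, best_d = i, d
        if best_i is not None:
            used[best_i] = True
            tp += 1
    false_pos = len(predictions) - tp
    false_neg = len(ground_truth) - tp
    return tp, false_pos, false_neg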
Address Lisbon; Portugal; October 2022
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ACMW
Notes HUPBA; not mentioned Approved no
Call Number Admin @ si @ GCD2022 Serial 3801
Permanent link to this record
 

 
Author H. Emrah Tasli; Jan van Gemert; Theo Gevers
Title Spot the differences: from a photograph burst to the single best picture Type Conference Article
Year 2013 Publication 21ST ACM International Conference on Multimedia Abbreviated Journal
Volume Issue Pages 729-732
Keywords
Abstract With the rise of the digital camera, people nowadays typically take several near-identical photos of the same scene to maximize the chances of a good shot. This paper proposes a user-friendly tool for exploring a personal photo gallery and selecting, or even creating, the best shot of a scene among its multiple alternatives. This functionality is realized through a graphical user interface where the best viewpoint can be selected from a generated panorama of the scene. Once the viewpoint is selected, the user can explore possible alternatives coming from the other images. Using this tool, one can explore a photo gallery efficiently and also create additional compositions from other images, going from a burst of photographs to the single best one. Even playful compositions, in which a person is duplicated in the same image, are possible with the proposed tool.
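The paper's tool is interactive; as a loose, clearly non-equivalent automatic baseline (an assumption of this note, not the authors' method), one could simply rank the near-identical shots of a burst by a focus measure and keep the highest-scoring one:

import cv2

def sharpest_shot(image_paths):
    # Naive automatic baseline (not the interactive tool described above):
    # rank burst photos by variance of the Laplacian and return the sharpest.
    def sharpness(path):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        return cv2.Laplacian(gray, cv2.CV_64F).var()
    return max(image_paths, key=sharpness)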
Address Barcelona
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ACM-MM
Notes ALTRES;ISE Approved no
Call Number TGG2013 Serial 2368
Permanent link to this record
 

 
Author Sezer Karaoglu; Jan van Gemert; Theo Gevers
Title Con-text: text detection using background connectivity for fine-grained object classification Type Conference Article
Year 2013 Publication 21ST ACM International Conference on Multimedia Abbreviated Journal
Volume Issue Pages 757-760
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ACM-MM
Notes ALTRES;ISE Approved no
Call Number Admin @ si @ KGG2013 Serial 2369
Permanent link to this record
 

 
Author Yaxing Wang; Abel Gonzalez-Garcia; Joost Van de Weijer; Luis Herranz
Title SDIT: Scalable and Diverse Cross-domain Image Translation Type Conference Article
Year 2019 Publication 27th ACM International Conference on Multimedia Abbreviated Journal
Volume Issue Pages 1267–1276
Keywords
Abstract Recently, image-to-image translation research has witnessed remarkable progress. Although current approaches successfully generate diverse outputs or perform scalable image transfer, these properties have not been combined into a single method. To address this limitation, we propose SDIT: Scalable and Diverse image-to-image translation, which combines both properties in a single generator. Diversity is determined by a latent variable randomly sampled from a normal distribution, while scalability is obtained by conditioning the network on the domain attributes. Additionally, we exploit an attention mechanism that lets the generator focus on the domain-specific attribute. We empirically demonstrate the performance of the proposed method on face mapping and on other datasets beyond faces.
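A minimal PyTorch sketch of the conditioning scheme the abstract describes (one generator fed with the input image, a domain attribute vector and a random latent code, plus an attention mask); the layer sizes and the attention layer here are assumptions, not the published SDIT architecture.

import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    # Illustrative only: a single generator conditioned on domain attributes
    # and a random latent code, with a learned attention mask.
    def __init__(self, n_domains, z_dim=8, ch=64):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3 + n_domains + z_dim, ch, 7, 1, 3), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, 1, 1), nn.ReLU(inplace=True))
        self.attention = nn.Sequential(nn.Conv2d(ch, 1, 1), nn.Sigmoid())
        self.decode = nn.Conv2d(ch, 3, 7, 1, 3)

    def forward(self, x, domain_onehot, z):
        b, _, h, w = x.shape
        cond = torch.cat([domain_onehot, z], dim=1)          # (b, n_domains + z_dim)
        cond = cond.view(b, -1, 1, 1).expand(-1, -1, h, w)   # broadcast spatially
        feat = self.encode(torch.cat([x, cond], dim=1))
        mask = self.attention(feat)                          # where to apply the edit
        return torch.tanh(self.decode(feat)) * mask + x * (1 - mask)

# usage sketch:
# g = ConditionalGenerator(n_domains=5)
# y = g(torch.randn(2, 3, 128, 128), torch.eye(5)[[0, 3]], torch.randn(2, 8))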
Address Nice; France; October 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ACM-MM
Notes LAMP; 600.106; 600.109; 600.141; 600.120 Approved no
Call Number Admin @ si @ WGW2019 Serial 3363
Permanent link to this record
 

 
Author Thanh Ha Do; Salvatore Tabbone; Oriol Ramos Terrades
Title Document noise removal using sparse representations over learned dictionary Type Conference Article
Year 2013 Publication Symposium on Document engineering Abbreviated Journal
Volume Issue Pages 161-168
Keywords
Abstract (Best paper award) In this paper, we propose an algorithm for denoising document images using sparse representations. From a training set, the algorithm is able to learn the main document characteristics as well as the kind of noise contained in the documents. In this perspective, we propose to model the noise energy based on the normalized cross-correlation between pairs of noisy and noise-free documents. Experimental results on several datasets demonstrate the robustness of our method compared with the state of the art.
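For context only, a generic patch-based sparse-coding denoiser built on scikit-learn is sketched below; it illustrates the family of methods, not the paper's noise-energy model based on normalized cross-correlation.

import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def denoise(noisy, patch_size=(8, 8), n_atoms=128, n_nonzero=4):
    # Learn a dictionary on random patches of the noisy image, then reconstruct
    # every patch from a few atoms (sparse coding via OMP) and re-assemble.
    train = extract_patches_2d(noisy, patch_size, max_patches=5000, random_state=0)
    train = train.reshape(len(train), -1)
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=n_nonzero,
                                       random_state=0)
    dico.fit(train - train.mean(axis=1, keepdims=True))
    patches = extract_patches_2d(noisy, patch_size)
    flat = patches.reshape(len(patches), -1)
    means = flat.mean(axis=1, keepdims=True)
    codes = dico.transform(flat - means)
    recon = (codes @ dico.components_ + means).reshape(patches.shape)
    return reconstruct_from_patches_2d(recon, noisy.shape)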
Address Barcelona; October 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-1-4503-1789-4 Medium
Area Expedition Conference ACM-DocEng
Notes DAG; 600.061 Approved no
Call Number Admin @ si @ DTR2013a Serial 2330
Permanent link to this record
 

 
Author Marc Bolaños; Maite Garolera; Petia Radeva
Title Active labeling application applied to food-related object recognition Type Conference Article
Year 2013 Publication 5th International Workshop on Multimedia for Cooking & Eating Activities Abbreviated Journal
Volume Issue Pages 45-50
Keywords
Abstract Every day, lifelogging devices, available for recording different aspects of our daily life, increase in number, quality and functions, just like the multiple applications that we give to them. Applying wearable devices to analyze the nutritional habits of people is a challenging application based on acquiring and analyzing life records over long periods of time. However, to extract the information of interest related to people's eating patterns, we need automatic methods to process large amounts of lifelogging data (e.g. recognition of food-related objects). Creating a rich set of manually labeled samples to train the algorithms is slow, tedious and subjective. To address this problem, we propose a novel method in the framework of Active Labeling for constructing a training set of thousands of images. Inspired by the hierarchical sampling method for active learning [6], we propose an Active forest that organizes the data hierarchically for easy and fast labeling. Moreover, introducing a classifier into the hierarchical structures, as well as transforming the feature space for better data clustering, further improves the algorithm. Our method is successfully tested to label 89,700 food-related objects and achieves a significant reduction in expert labeling time.

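A toy sketch of the cluster-then-propagate idea behind hierarchical active labeling (label only one representative per cluster and reuse its label for the rest); this is a flat K-means illustration, not the Active forest proposed in the paper, and ask_oracle is a hypothetical expert callback.

import numpy as np
from sklearn.cluster import KMeans

def propagate_labels(features, ask_oracle, n_clusters=50):
    # features: (n_samples, n_features) array; ask_oracle(i) returns the label
    # an expert assigns to sample i.
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(features)
    labels = np.empty(len(features), dtype=object)
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        centre = km.cluster_centers_[c]
        # Ask the expert only about the member closest to the cluster centre.
        rep = members[np.argmin(np.linalg.norm(features[members] - centre, axis=1))]
        labels[members] = ask_oracle(rep)
    return labels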
Address Barcelona; October 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ACM-CEA
Notes MILAB Approved no
Call Number Admin @ si @ BGR2013b Serial 2637
Permanent link to this record
 

 
Author Oriol Ramos Terrades; Alejandro Hector Toselli; Nicolas Serrano; Veronica Romero; Enrique Vidal; Alfons Juan
Title Interactive layout analysis and transcription systems for historic handwritten documents Type Conference Article
Year 2010 Publication 10th ACM Symposium on Document Engineering Abbreviated Journal
Volume Issue Pages 219–222
Keywords Handwriting recognition; Interactive predictive processing; Partial supervision; Interactive layout analysis
Abstract The amount of digitized legacy documents has been rising dramatically over recent years, mainly due to the increasing number of on-line digital libraries publishing this kind of documents, which are waiting to be classified and finally transcribed into a textual electronic format (such as ASCII or PDF). Nevertheless, most of the available fully-automatic applications addressing this task are far from perfect, and heavy, inefficient human intervention is often required to check and correct their results. In contrast, multimodal interactive-predictive approaches may allow users to participate in the process, helping the system to improve the overall performance. With this in mind, two sets of recent advances are introduced in this work: a novel interactive method for text block detection, and two multimodal interactive handwritten text transcription systems that use active learning and interactive-predictive technologies in the recognition process.
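A toy sketch of the interactive-predictive loop mentioned in the abstract: the system proposes a transcription, the user corrects the first wrong word, and the system re-decodes conditioned on the validated prefix. Both callbacks are placeholders, not the systems presented in the paper.

def interactive_transcription(line_image, decode, user_correct):
    # decode(image, prefix) -> full word list beginning with `prefix`
    # (placeholder for a prefix-constrained handwriting recognizer).
    # user_correct(hypothesis) -> (index, word) of the first error, or None
    # if the user accepts the hypothesis as-is.
    prefix = []
    while True:
        hypothesis = decode(line_image, prefix)
        feedback = user_correct(hypothesis)
        if feedback is None:
            return hypothesis
        error_index, corrected_word = feedback
        # Everything before the error is implicitly validated by the user.
        prefix = hypothesis[:error_index] + [corrected_word]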
Address Manchester, United Kingdom
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ACM
Notes DAG Approved no
Call Number Admin @ si @RTS2010 Serial 1857
Permanent link to this record
 

 
Author Muhammad Anwer Rao; Fahad Shahbaz Khan; Joost Van de Weijer; Jorma Laaksonen
Title Tex-Nets: Binary Patterns Encoded Convolutional Neural Networks for Texture Recognition Type Conference Article
Year 2017 Publication 19th International Conference on Multimodal Interaction Abbreviated Journal
Volume Issue Pages
Keywords Convolutional Neural Networks; Texture Recognition; Local Binary Patterns
Abstract Recognizing materials and textures in realistic imaging conditions is a challenging computer vision problem. For many years, orderless representations based on local features were the dominant approach to texture recognition. Recently, deep local features extracted from the intermediate layers of a Convolutional Neural Network (CNN) have been used as filter banks. These dense local descriptors from a deep model, when encoded with Fisher Vectors, have been shown to provide excellent results for texture recognition. The CNN models employed in such approaches take RGB patches as input and are trained on a large amount of labeled images. We show that CNN models, which we call TEX-Nets, trained using mapped coded images with explicit texture information provide complementary information to the standard deep models trained on RGB patches. We further investigate two deep architectures, namely early and late fusion, to combine the texture and color information. Experiments on benchmark texture datasets clearly demonstrate that TEX-Nets provide complementary information to the standard RGB deep network. Our approach yields gains of 4.8%, 3.5%, 2.6% and 4.1% in accuracy on the DTD, KTH-TIPS-2a, KTH-TIPS-2b and Texture-10 datasets, respectively, compared to the standard RGB network of the same architecture. Further, our final combination leads to consistent improvements over the state of the art on all four datasets.
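A small sketch of the two ingredients described above, under the assumption that two trained classifiers are already available: an LBP-coded input computed with scikit-image, and late fusion by averaging the class scores of an RGB model and a texture-coded model. The predict_* callables are stand-ins, not TEX-Nets.

import numpy as np
from skimage.color import rgb2gray
from skimage.feature import local_binary_pattern

def lbp_map(rgb_image, points=8, radius=1):
    # Texture-coded version of the input image (uniform LBP codes).
    return local_binary_pattern(rgb2gray(rgb_image), points, radius, method="uniform")

def late_fusion_predict(rgb_image, predict_rgb, predict_tex):
    # Each predict_* maps an image to a vector of class probabilities.
    scores = (predict_rgb(rgb_image) + predict_tex(lbp_map(rgb_image))) / 2.0
    return int(np.argmax(scores))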
Address Glasgow; Scotland; November 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ACM
Notes LAMP; 600.109; 600.068; 600.120 Approved no
Call Number Admin @ si @ RKW2017 Serial 3038
Permanent link to this record
 

 
Author Xinhang Song; Haitao Zeng; Sixian Zhang; Luis Herranz; Shuqiang Jiang
Title Generalized Zero-shot Learning with Multi-source Semantic Embeddings for Scene Recognition Type Conference Article
Year 2020 Publication 28th ACM International Conference on Multimedia Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Recognizing visual categories from semantic descriptions is a promising way to extend the capability of a visual classifier beyond the concepts represented in the training data (i.e. seen categories). This problem is addressed by (generalized) zero-shot learning methods (GZSL), which leverage semantic descriptions that connect unseen categories to seen ones (e.g. label embeddings, attributes). Conventional GZSL methods are designed mostly for object recognition. In this paper we focus on zero-shot scene recognition, a more challenging setting with hundreds of categories whose differences can be subtle and often localized in certain objects or regions. Conventional GZSL representations are not rich enough to capture these local discriminative differences. Addressing these limitations, we propose a feature-generation framework with two novel components: 1) multiple sources of semantic information (i.e. attributes, word embeddings and descriptions), and 2) region descriptions that can enhance scene discrimination. To generate synthetic visual features, we propose a two-step generative approach in which local descriptions are sampled and used as conditions to generate visual features. The generated features are then aggregated and used together with real features to train a joint classifier. In order to evaluate the proposed method, we introduce a new dataset for zero-shot scene recognition with multi-semantic annotations. Experimental results on the proposed dataset and on the SUN Attribute dataset illustrate the effectiveness of the proposed method.
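A minimal PyTorch sketch of the feature-generation idea (synthesize visual features for unseen classes from their semantic vectors, then mix them with real seen-class features when training a classifier); the MLP generator and all dimensions are assumptions, not the two-step generative model of the paper.

import torch
import torch.nn as nn

class FeatureGenerator(nn.Module):
    # Illustrative conditional generator: semantic vector + noise -> visual feature.
    def __init__(self, sem_dim, noise_dim=64, feat_dim=2048):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(
            nn.Linear(sem_dim + noise_dim, 1024), nn.LeakyReLU(0.2),
            nn.Linear(1024, feat_dim), nn.ReLU())

    def forward(self, semantics):                     # (batch, sem_dim)
        noise = torch.randn(semantics.size(0), self.noise_dim, device=semantics.device)
        return self.net(torch.cat([semantics, noise], dim=1))

def synthesize_unseen(generator, class_semantics, per_class=100):
    # class_semantics: {class_id: 1-D semantic tensor}; returns synthetic
    # features and labels for training the joint classifier.
    feats, labels = [], []
    for class_id, sem in class_semantics.items():
        feats.append(generator(sem.unsqueeze(0).repeat(per_class, 1)))
        labels += [class_id] * per_class
    return torch.cat(feats), torch.tensor(labels)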
Address Virtual; October 2020
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ACM
Notes LAMP; 600.141; 600.120 Approved no
Call Number Admin @ si @ SZZ2020 Serial 3465
Permanent link to this record
 

 
Author Raul Gomez; Yahui Liu; Marco de Nadai; Dimosthenis Karatzas; Bruno Lepri; Nicu Sebe
Title Retrieval Guided Unsupervised Multi-domain Image to Image Translation Type Conference Article
Year 2020 Publication 28th ACM International Conference on Multimedia Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Image-to-image translation aims to learn a mapping that transforms an image from one visual domain to another. Recent works assume that image descriptors can be disentangled into a domain-invariant content representation and a domain-specific style representation. Thus, translation models seek to preserve the content of source images while changing the style to a target visual domain. However, synthesizing new images is extremely challenging, especially in multi-domain translation, as the network has to compose content and style to generate reliable and diverse images in multiple domains. In this paper we propose the use of an image retrieval system to assist the image-to-image translation task. First, we train an image-to-image translation model to map images to multiple domains. Then, we train an image retrieval model using real and generated images to find images similar in content to a query image but belonging to a different domain. Finally, we exploit the image retrieval system to fine-tune the image-to-image translation model and generate higher-quality images. Our experiments show the effectiveness of the proposed solution and highlight the contribution of the retrieval network, which can benefit from additional unlabeled data and help image-to-image translation models in the presence of scarce data.
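A sketch of the retrieval step only, under the assumption of a pretrained content encoder `embed` (a placeholder, not the retrieval network trained in the paper): given a query image and candidate images from another domain, return the candidate whose embedding is closest in cosine similarity.

import torch
import torch.nn.functional as F

def retrieve_most_similar(query_img, candidate_imgs, embed):
    # embed: placeholder content encoder mapping a batch of images to (n, d) features.
    q = F.normalize(embed(query_img.unsqueeze(0)), dim=1)        # (1, d)
    c = F.normalize(embed(torch.stack(candidate_imgs)), dim=1)   # (n, d)
    cosine = (c @ q.t()).squeeze(1)
    return int(torch.argmax(cosine))                             # index of the best match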
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ACM
Notes DAG; 600.121 Approved no
Call Number Admin @ si @ GLN2020 Serial 3497
Permanent link to this record
 

 
Author Antonio Lopez; J. Hilgenstock; A. Busse; Ramon Baldrich; Felipe Lumbreras; Joan Serrat
Title Nighttime Vehicle Detection for Intelligent Headlight Control Type Conference Article
Year 2008 Publication 10th International Conference on Advanced Concepts for Intelligent Vision Systems Abbreviated Journal
Volume 5259 Issue Pages 113–124
Keywords Intelligent Headlights; vehicle detection
Abstract
Address Juan-les-Pins, France
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ACIVS
Notes ADAS;CIC Approved no
Call Number ADAS @ adas @ LHB2008a Serial 1098
Permanent link to this record
 

 
Author Mohammad Rouhani; Angel Sappa
Title A Novel Approach to Geometric Fitting of Implicit Quadrics Type Conference Article
Year 2009 Publication 11th International Conference on Advanced Concepts for Intelligent Vision Systems Abbreviated Journal
Volume 5807 Issue Pages 121–132
Keywords
Abstract This paper presents a novel approach for estimating the geometric distance from a given point to the corresponding implicit quadric curve/surface. The proposed estimate is based on the height of a tetrahedron, which is used as a coarse but reliable approximation of the real distance. The estimated distance is then used for finding the best set of quadric parameters by means of the Levenberg-Marquardt algorithm, a common framework in other geometric fitting approaches. Comparisons of the proposed approach with previous ones are provided, showing improvements both in CPU time and in the accuracy of the obtained results.
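For orientation only, the sketch below fits an implicit conic with SciPy's Levenberg-Marquardt solver, using the common first-order geometric-distance approximation |f(x)| / ||grad f(x)|| as residual; this is a stand-in for, not an implementation of, the tetrahedron-height estimate proposed in the paper.

import numpy as np
from scipy.optimize import least_squares

def conic_residuals(p, x, y):
    # Implicit conic f(x, y) = a x^2 + b xy + c y^2 + d x + e y + f.
    p = p / np.linalg.norm(p)                     # avoid the trivial all-zero solution
    a, b, c, d, e, f = p
    value = a*x*x + b*x*y + c*y*y + d*x + e*y + f
    gx = 2*a*x + b*y + d
    gy = b*x + 2*c*y + e
    # First-order approximation of the point-to-curve geometric distance.
    return value / np.sqrt(gx*gx + gy*gy + 1e-12)

def fit_conic(x, y):
    p0 = np.array([1.0, 0.0, 1.0, 0.0, 0.0, -1.0])   # start from a unit circle
    solution = least_squares(conic_residuals, p0, args=(x, y), method="lm")
    return solution.x / np.linalg.norm(solution.x)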
Address Bordeaux, France
Corporate Author Thesis
Publisher Springer Berlin Heidelberg Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-642-04696-4 Medium
Area Expedition Conference ACIVS
Notes ADAS Approved no
Call Number ADAS @ adas @ RoS2009 Serial 1194
Permanent link to this record
 

 
Author Cesar Isaza; Joaquin Salas; Bogdan Raducanu
Title Toward the Detection of Urban Infrastructures Edge Shadows Type Conference Article
Year 2010 Publication 12th International Conference on Advanced Concepts for Intelligent Vision Systems Abbreviated Journal
Volume 6474 Issue I Pages 30–37
Keywords
Abstract In this paper, we propose a novel technique to detect the shadows cast by urban infrastructure, such as buildings, billboards, and traffic signs, using a sequence of images taken from a fixed camera. In our approach, we compute two different background models in parallel: one for the edges and one for the reflected light intensity. An algorithm is proposed to train the system to distinguish between moving edges in general and edges that belong to static objects, creating an edge background model. Then, during operation, a background intensity model allows us to separate moving from static objects. The edges included in moving objects and those that belong to the edge background model are subtracted from the current image edges, and the remaining edges are the ones cast by the urban infrastructure. Our method is tested on a typical crossroad scene and the results show that the approach is sound and promising.
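An OpenCV sketch of the "two parallel background models" idea (an intensity background via MOG2 plus an accumulated edge background): current-frame edges that fall neither on moving objects nor on the edge background are kept as candidate edges. The thresholds and update rate are arbitrary choices, not the authors' training procedure.

import cv2
import numpy as np

bg_intensity = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)
edge_background = None

def remaining_edges(frame, alpha=0.05, edge_thresh=0.5):
    # Keep edges of the current frame that belong neither to moving objects
    # (intensity background model) nor to the accumulated edge background.
    global edge_background
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 80, 160).astype(np.float32) / 255.0
    moving = bg_intensity.apply(frame) > 0
    if edge_background is None:
        edge_background = edges.copy()
    edge_background = (1 - alpha) * edge_background + alpha * edges
    keep = (edges > 0) & ~moving & (edge_background < edge_thresh)
    return keep.astype(np.uint8) * 255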
Address Sydney, Australia
Corporate Author Thesis
Publisher Springer Berlin Heidelberg Place of Publication Editor Blanc-Talon et al. (eds.)
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-642-17687-6 Medium
Area Expedition Conference ACIVS
Notes OR;MV Approved no
Call Number BCNPCL @ bcnpcl @ ISR2010 Serial 1458
Permanent link to this record