Author Debora Gil; Antonio Esteban Lansaque; Agnes Borras; Carles Sanchez
  Title Enhancing virtual bronchoscopy with intra-operative data using a multi-objective GAN Type Journal Article
  Year 2019 Publication International Journal of Computer Assisted Radiology and Surgery Abbreviated Journal IJCAR  
  Volume 7 Issue 1 Pages  
  Keywords  
  Abstract This manuscript has been withdrawn by bioRxiv due to upload of an incorrect version of the manuscript by the authors. Therefore, this manuscript should not be cited as a reference for this project.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes IAM; 600.139; 600.145 Approved no  
  Call Number Admin @ si @ GEB2019 Serial 3307  
 

 
Author David Berga; C. Wloka; JK. Tsotsos
  Title Modeling task influences for saccade sequence and visual relevance prediction Type Journal Article
  Year 2019 Publication Journal of Vision Abbreviated Journal JV  
  Volume 19 Issue 10 Pages 106c-106c  
  Keywords  
  Abstract Previous work from Wloka et al. (2017) presented the Selective Tuning Attentive Reference model Fixation Controller (STAR-FC), an active vision model for saccade prediction. Although the model is able to efficiently predict saccades during free-viewing, it is well known that stimulus and task instructions can strongly affect eye movement patterns (Yarbus, 1967). These factors are considered in previous Selective Tuning architectures (Tsotsos and Kruijne, 2014)(Tsotsos, Kotseruba and Wloka, 2016)(Rosenfeld, Biparva & Tsotsos 2017), proposing a way to combine bottom-up and top-down contributions to fixation and saccade programming. In particular, task priming has been shown to be crucial to the deployment of eye movements, involving interactions between brain areas related to goal-directed behavior, working and long-term memory in combination with stimulus-driven eye movement neuronal correlates. Initial theories and models of these influences include (Rao, Zelinsky, Hayhoe and Ballard, 2002)(Navalpakkam and Itti, 2005)(Huang and Pashler, 2007) and show distinct ways to process the task requirements in combination with bottom-up attention. In this study we extend the STAR-FC with novel computational definitions of Long-Term Memory, Visual Task Executive and a Task Relevance Map. With these modules we are able to use textual instructions in order to guide the model to attend to specific categories of objects and/or places in the scene. We have designed our memory model by processing a hierarchy of visual features learned from salient object detection datasets. The relationship between the executive task instructions and the memory representations has been specified using a tree of semantic similarities between the learned features and the object category labels. Results reveal that by using this model, the resulting relevance maps and predicted saccades have a higher probability to fall inside the salient regions depending on the distinct task instructions.  
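
The abstract describes weighting bottom-up saliency by a task relevance map derived from semantic similarity between instruction terms and object categories. A minimal, hypothetical numpy sketch of that combination step; the label map, similarity scores, and function names are illustrative stand-ins, not the STAR-FC code:

```python
import numpy as np

def relevance_map(label_map, similarity):
    """Paint each labeled object region with its task-similarity score."""
    rel = np.zeros(label_map.shape, dtype=float)
    for label, sim in similarity.items():
        rel[label_map == label] = sim
    return rel

def predict_fixations(saliency, label_map, similarity, n_fix=3):
    """Rank pixels by bottom-up saliency weighted by task relevance."""
    combined = saliency * relevance_map(label_map, similarity)
    top = np.argsort(combined, axis=None)[::-1][:n_fix]
    return np.column_stack(np.unravel_index(top, saliency.shape))

# Toy scene: label 1 = "cup", label 2 = "book"; the instruction primes "cup".
label_map = np.zeros((4, 4), dtype=int)
label_map[0, 0], label_map[3, 3] = 1, 2
saliency = np.random.default_rng(0).random((4, 4))
print(predict_fixations(saliency, label_map, {1: 0.9, 2: 0.2}))
```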
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes NEUROBIT; 600.128; 600.120 Approved no  
  Call Number Admin @ si @ BWT2019 Serial 3308  
 

 
Author David Berga; Xavier Otazu; Xose R. Fernandez-Vidal; Victor Leboran; Xose M. Pardo
  Title Generating Synthetic Images for Visual Attention Modeling Type Journal Article
  Year 2019 Publication Perception Abbreviated Journal PER  
  Volume 48 Issue Pages 99  
  Keywords  
  Abstract  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes NEUROBIT; no menciona Approved no  
  Call Number Admin @ si @ BOF2019 Serial 3309  
 

 
Author Mohammad Naser Sabet; Pau Buch Cardona; Egils Avots; Kamal Nasrollahi; Sergio Escalera; Thomas B. Moeslund; Gholamreza Anbarjafari
  Title Privacy-Constrained Biometric System for Non-cooperative Users Type Journal Article
  Year 2019 Publication Entropy Abbreviated Journal ENTROPY  
  Volume 21 Issue 11 Pages 1033  
  Keywords biometric recognition; multimodal-based human identification; privacy; deep learning  
  Abstract With the consolidation of the new data protection regulation paradigm for each individual within the European Union (EU), major biometric technologies are now confronted with many concerns related to user privacy in biometric deployments. When an individual's biometrics are disclosed, sensitive personal data such as financial or health information are at high risk of being misused or compromised. This issue can be escalated considerably in scenarios of non-cooperative users, such as elderly people residing in care homes, who are unable to interact conveniently and securely with the biometric system. The primary goal of this study is to design a novel database to investigate the problem of automatic people recognition under privacy constraints. To do so, the collected dataset contains the subjects' hand and foot traits and excludes the face biometrics of individuals in order to protect their privacy. We carried out extensive simulations using different baseline methods, including deep learning. Simulation results show that, with the spatial features extracted from the subject sequence in both individual hand and foot videos, state-of-the-art deep models provide promising recognition performance.  
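
As a loose illustration of the pipeline the abstract implies (per-frame spatial features from a hand or foot video, pooled over time, then classified per subject), a hedged PyTorch sketch; the backbone, pooling, and subject count are generic stand-ins, not the paper's baselines:

```python
import torch
import torch.nn as nn

class VideoID(nn.Module):
    def __init__(self, n_subjects=50):
        super().__init__()
        self.features = nn.Sequential(          # tiny spatial feature extractor
            nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(16, n_subjects)

    def forward(self, video):                   # video: (batch, time, 3, H, W)
        b, t = video.shape[:2]
        f = self.features(video.flatten(0, 1))  # per-frame spatial features
        f = f.view(b, t, -1).mean(dim=1)        # average-pool over time
        return self.classifier(f)               # subject logits

clip = torch.randn(2, 8, 3, 64, 64)             # 2 clips, 8 frames each
print(VideoID()(clip).shape)                    # torch.Size([2, 50])
```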
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes HuPBA; no proj Approved no  
  Call Number Admin @ si @ NBA2019 Serial 3313  
 

 
Author Juanjo Rubio; Takahiro Kashiwa; Teera Laiteerapong; Wenlong Deng; Kohei Nagai; Sergio Escalera; Kotaro Nakayama; Yutaka Matsuo; Helmut Prendinger
  Title Multi-class structural damage segmentation using fully convolutional networks Type Journal Article
  Year 2019 Publication Computers in Industry Abbreviated Journal COMPUTIND  
  Volume 112 Issue Pages 103121  
  Keywords Bridge damage detection; Deep learning; Semantic segmentation  
  Abstract Structural Health Monitoring (SHM) has benefited from computer vision and more recently, Deep Learning approaches, to accurately estimate the state of deterioration of infrastructure. In our work, we test Fully Convolutional Networks (FCNs) with a dataset of deck areas of bridges for damage segmentation. We create a dataset for delamination and rebar exposure that has been collected from inspection records of bridges in Niigata Prefecture, Japan. The dataset consists of 734 images with three labels per image, which makes it the largest dataset of images of bridge deck damage. This data allows us to estimate the performance of our method based on regions of agreement, which emulates the uncertainty of in-field inspections. We demonstrate the practicality of FCNs to perform automated semantic segmentation of surface damages. Our model achieves a mean accuracy of 89.7% for delamination and 78.4% for rebar exposure, and a weighted F1 score of 81.9%.  
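
For the reported metrics, a small sketch of how per-class accuracy and a weighted F1 score can be computed over flattened segmentation masks; the label arrays are toy stand-ins and the sklearn calls are standard, not the authors' evaluation code:

```python
import numpy as np
from sklearn.metrics import f1_score, recall_score

# Flattened label maps: 0 = background, 1 = delamination, 2 = rebar exposure.
y_true = np.array([0, 0, 1, 1, 2, 2, 1, 0])
y_pred = np.array([0, 1, 1, 1, 2, 0, 1, 0])

per_class_acc = recall_score(y_true, y_pred, average=None)  # recall per class
print("delamination acc:", per_class_acc[1], "rebar acc:", per_class_acc[2])
print("weighted F1:", f1_score(y_true, y_pred, average="weighted"))
```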
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes HuPBA; no proj Approved no  
  Call Number Admin @ si @ RKL2019 Serial 3315  
 

 
Author Egils Avots; Meysam Madadi; Sergio Escalera; Jordi Gonzalez; Xavier Baro; Paul Pallin; Gholamreza Anbarjafari
  Title From 2D to 3D geodesic-based garment matching Type Journal Article
  Year 2019 Publication Multimedia Tools and Applications Abbreviated Journal MTAP  
  Volume 78 Issue 18 Pages 25829–25853  
  Keywords Shape matching; Geodesic distance; Texture mapping; RGBD image processing; Gaussian mixture model  
  Abstract A new approach for 2D to 3D garment retexturing is proposed based on Gaussian mixture models and thin plate splines (TPS). An automatically segmented garment of an individual is matched to a new source garment and rendered, resulting in augmented images in which the target garment has been retextured using the texture of the source garment. We divide the problem into garment boundary matching based on Gaussian mixture models and then interpolate inner points using surface topology extracted through geodesic paths, which leads to a more realistic result than standard approaches. We evaluated and compared our system quantitatively by root mean square error (RMS) and qualitatively using the mean opinion score (MOS), showing the benefits of the proposed methodology on our gathered dataset.  
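
A minimal sketch of the thin-plate-spline interpolation step, assuming the GMM-based boundary matching has already produced corresponding point sets; scipy's RBFInterpolator with a thin-plate-spline kernel serves as a stand-in, and the point coordinates are illustrative:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Matched boundary (plus one inner) points on source and target garments.
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]], float)
dst = np.array([[0, 0], [1.1, 0], [1.0, 1.2], [-0.1, 1], [0.55, 0.6]], float)

# Fit a TPS warp mapping source coordinates to target coordinates.
tps = RBFInterpolator(src, dst, kernel="thin_plate_spline")

# Warp a grid of inner source points onto the target garment.
grid = np.stack(np.meshgrid(np.linspace(0, 1, 5),
                            np.linspace(0, 1, 5)), -1).reshape(-1, 2)
print(tps(grid)[:3])  # first few warped coordinates
```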
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes HuPBA; ISE; 600.098; 600.119; 602.133 Approved no  
  Call Number Admin @ si @ AME2019 Serial 3317  
 

 
Author Andre Litvin; Kamal Nasrollahi; Sergio Escalera; Cagri Ozcinar; Thomas B. Moeslund; Gholamreza Anbarjafari
  Title A Novel Deep Network Architecture for Reconstructing RGB Facial Images from Thermal for Face Recognition Type Journal Article
  Year 2019 Publication Multimedia Tools and Applications Abbreviated Journal MTAP  
  Volume 78 Issue 18 Pages 25259–25271  
  Keywords Fully convolutional networks; FusionNet; Thermal imaging; Face recognition  
  Abstract This work proposes a fully convolutional network architecture for RGB face image generation from a given input thermal face image to be applied in face recognition scenarios. The proposed method is based on the FusionNet architecture and increases robustness against overfitting using dropout after bridge connections, randomised leaky ReLUs (RReLUs), and orthogonal regularization. Furthermore, we propose to use a decoding block with resize convolution instead of transposed convolution to improve final RGB face image generation. To validate our proposed network architecture, we train a face classifier and compare its face recognition rate on the reconstructed RGB images from the proposed architecture, to those when reconstructing images with the original FusionNet, as well as when using the original RGB images. As a result, we are introducing a new architecture which leads to a more accurate network.  
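
A hedged sketch of the decoding block the abstract argues for: resize convolution (upsample, then an ordinary convolution) with a randomized leaky ReLU and dropout; channel sizes and the exact layer order are illustrative, not the authors' FusionNet variant:

```python
import torch
import torch.nn as nn

class ResizeConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch, p_drop=0.2):
        super().__init__()
        self.block = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),        # resize ...
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),  # ... then conv
            nn.BatchNorm2d(out_ch),
            nn.RReLU(),             # randomized leaky ReLU
            nn.Dropout2d(p_drop),   # dropout for overfitting robustness
        )

    def forward(self, x):
        return self.block(x)

x = torch.randn(1, 64, 16, 16)
print(ResizeConvBlock(64, 32)(x).shape)  # torch.Size([1, 32, 32, 32])
```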
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes HuPBA; no menciona Approved no  
  Call Number Admin @ si @ LNE2019 Serial 3318  
 

 
Author Ikechukwu Ofodile; Ahmed Helmi; Albert Clapes; Egils Avots; Kerttu Maria Peensoo; Sandhra Mirella Valdma; Andreas Valdmann; Heli Valtna Lukner; Sergey Omelkov; Sergio Escalera; Cagri Ozcinar; Gholamreza Anbarjafari
  Title Action recognition using single-pixel time-of-flight detection Type Journal Article
  Year 2019 Publication Entropy Abbreviated Journal ENTROPY  
  Volume 21 Issue 4 Pages 414  
  Keywords single pixel single photon image acquisition; time-of-flight; action recognition  
  Abstract Action recognition is a challenging task that plays an important role in many robotic systems, which highly depend on visual input feeds. However, due to privacy concerns, it is important to find a method which can recognise actions without using a visual feed. In this paper, we propose a concept for detecting actions while preserving the test subject’s privacy. Our proposed method relies only on recording the temporal evolution of light pulses scattered back from the scene. Such a data trace recording one action contains a sequence of one-dimensional arrays of voltage values acquired by a single-pixel detector at a 1 GHz repetition rate. Information about both the distance to the object and its shape is embedded in the traces. We apply machine learning in the form of recurrent neural networks for data analysis and demonstrate successful action recognition. The experimental results show that our proposed method achieves on average 96.47% accuracy on the actions walking forward, walking backwards, sitting down, standing up and waving a hand, using a recurrent neural network.  
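
A minimal sketch of the recurrent classification stage, assuming each recording is a sequence of fixed-length voltage traces; the trace length, hidden size, and network layout are illustrative, with only the five action classes taken from the abstract:

```python
import torch
import torch.nn as nn

class TraceLSTM(nn.Module):
    def __init__(self, trace_len=256, hidden=128, n_actions=5):
        super().__init__()
        self.lstm = nn.LSTM(trace_len, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, x):              # x: (batch, time, trace_len)
        _, (h, _) = self.lstm(x)       # final hidden state summarizes the trace
        return self.head(h[-1])        # logits over the five actions

traces = torch.randn(4, 100, 256)      # 4 recordings, 100 pulse returns each
print(TraceLSTM()(traces).shape)       # torch.Size([4, 5])
```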
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes HuPBA; no proj Approved no  
  Call Number Admin @ si @ OHC2019 Serial 3319  
 

 
Author Hamed H. Aghdam; Abel Gonzalez-Garcia; Joost Van de Weijer; Antonio Lopez
  Title Active Learning for Deep Detection Neural Networks Type Conference Article
  Year 2019 Publication 18th IEEE International Conference on Computer Vision Abbreviated Journal  
  Volume Issue Pages 3672-3680  
  Keywords  
  Abstract The cost of drawing object bounding boxes (i.e., labeling) for millions of images is prohibitively high. For instance, labeling pedestrians in a regular urban image could take 35 seconds on average. Active learning aims to reduce the cost of labeling by selecting only those images that are informative to improve the detection network accuracy. In this paper, we propose a method to perform active learning of object detectors based on convolutional neural networks. We propose a new image-level scoring process to rank unlabeled images for their automatic selection, which clearly outperforms classical scores. The proposed method can be applied to videos and sets of still images. In the former case, temporal selection rules can complement our scoring process. As a relevant use case, we extensively study the performance of our method on the task of pedestrian detection. Overall, the experiments show that the proposed method performs better than random selection.  
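
A hedged sketch of the selection loop: score each unlabeled image at the image level, then send the top-ranked ones for labeling. The entropy-based score below is a generic stand-in, not the paper's scoring process:

```python
import math

def detection_entropy(confidences):
    """Mean binary entropy of per-detection confidences in one image."""
    eps = 1e-12
    ent = [-p * math.log(p + eps) - (1 - p) * math.log(1 - p + eps)
           for p in confidences]
    return sum(ent) / max(len(ent), 1)

def select_for_labeling(images, k=2):
    """images: dict of name -> list of detector confidences."""
    ranked = sorted(images, key=lambda n: detection_entropy(images[n]),
                    reverse=True)
    return ranked[:k]

pool = {"img_a": [0.51, 0.49], "img_b": [0.99, 0.97], "img_c": [0.6]}
print(select_for_labeling(pool))  # most uncertain images first
```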
  Address Seoul; Korea; October 2019  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICCV  
  Notes ADAS; LAMP; 600.124; 600.109; 600.141; 600.120; 600.118 Approved no  
  Call Number Admin @ si @ AGW2019 Serial 3321  
 

 
Author Felipe Codevilla; Eder Santana; Antonio Lopez; Adrien Gaidon
  Title Exploring the Limitations of Behavior Cloning for Autonomous Driving Type Conference Article
  Year 2019 Publication 18th IEEE International Conference on Computer Vision Abbreviated Journal  
  Volume Issue Pages 9328-9337  
  Keywords  
  Abstract Driving requires reacting to a wide variety of complex environment conditions and agent behaviors. Explicitly modeling each possible scenario is unrealistic. In contrast, imitation learning can, in theory, leverage data from large fleets of human-driven cars. Behavior cloning in particular has been successfully used to learn simple visuomotor policies end-to-end, but scaling to the full spectrum of driving behaviors remains an unsolved problem. In this paper, we propose a new benchmark to experimentally investigate the scalability and limitations of behavior cloning. We show that behavior cloning leads to state-of-the-art results, executing complex lateral and longitudinal maneuvers, even in unseen environments, without being explicitly programmed to do so. However, we confirm some limitations of the behavior cloning approach: some well-known limitations (e.g., dataset bias and overfitting), new generalization issues (e.g., dynamic objects and the lack of causal modeling), and training instabilities, all requiring further research before behavior cloning can graduate to real-world driving. The code, dataset, benchmark, and agent studied in this paper can be found at github.  
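
As a loose illustration of the behavior cloning setup being benchmarked, a minimal supervised training step that regresses driving controls from camera frames; the architecture, tensor shapes, and loss are illustrative, not the paper's agent:

```python
import torch
import torch.nn as nn

policy = nn.Sequential(                  # tiny visuomotor policy
    nn.Conv2d(3, 16, 5, stride=4), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=4), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 5 * 12, 2),           # [steering, throttle] for 88x200 input
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)

frames = torch.randn(8, 3, 88, 200)      # batch of dashcam frames
controls = torch.randn(8, 2)             # expert actions (random stand-ins)

loss = nn.functional.mse_loss(policy(frames), controls)  # clone the expert
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```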
  Address Seoul; Korea; October 2019  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICCV  
  Notes ADAS; 600.124; 600.118 Approved no  
  Call Number Admin @ si @ CSL2019 Serial 3322  
 

 
Author Zhengying Liu; Isabelle Guyon; Julio C. S. Jacques Junior; Meysam Madadi; Sergio Escalera; Adrien Pavao; Hugo Jair Escalante; Wei-Wei Tu; Zhen Xu; Sebastien Treguer
  Title AutoCV Challenge Design and Baseline Results Type Conference Article
  Year 2019 Publication La Conference sur l’Apprentissage Automatique Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract We present the design and beta tests of a new machine learning challenge called AutoCV (for Automated Computer Vision), which is the first event in a series of challenges we are planning on the theme of Automated Deep Learning. We target applications for which Deep Learning methods have had great success in the past few years, with the aim of pushing the state of the art in fully automated methods to design the architecture of neural networks and train them without any human intervention. The tasks are restricted to multi-label image classification problems, from domains including medical, aerial, people, object, and handwriting imaging. Thus the type of images will vary a lot in scales, textures, and structure. Raw data are provided (no features extracted), but all datasets are formatted in a uniform tensor manner (although images may have fixed or variable sizes within a dataset). The participants' code will be blind tested on a challenge platform in a controlled manner, with restrictions on training and test time and memory limitations. The challenge is part of the official selection of IJCNN 2019.  
  Address Toulouse; France; July 2019  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes HUPBA; no proj Approved no  
  Call Number Admin @ si @ LGJ2019 Serial 3323  
 

 
Author Reza Azad; Maryam Asadi Aghbolaghi; Mahmood Fathy; Sergio Escalera
  Title Bi-Directional ConvLSTM U-Net with Densley Connected Convolutions Type Conference Article
  Year 2019 Publication Visual Recognition for Medical Images workshop Abbreviated Journal  
  Volume Issue Pages 406-415  
  Keywords  
  Abstract In recent years, deep learning-based networks have achieved state-of-the-art performance in medical image segmentation. Among the existing networks, U-Net has been successfully applied on medical image segmentation. In this paper, we propose an extension of U-Net, Bi-directional ConvLSTM U-Net with Densely connected convolutions (BCDU-Net), for medical image segmentation, in which we take full advantages of U-Net, bi-directional ConvLSTM (BConvLSTM) and the mechanism of dense convolutions. Instead of a simple concatenation in the skip connection of U-Net, we employ BConvLSTM to combine the feature maps extracted from the corresponding encoding path and the previous decoding up-convolutional layer in a non-linear way. To strengthen feature propagation and encourage feature reuse, we use densely connected convolutions in the last convolutional layer of the encoding path. Finally, we can accelerate the convergence speed of the proposed network by employing batch normalization (BN). The proposed model is evaluated on three datasets of: retinal blood vessel segmentation, skin lesion segmentation, and lung nodule segmentation, achieving state-of-the-art performance.  
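
A compact, hypothetical sketch of the skip-connection idea: treat the encoder feature map and the decoder feature map as a two-step sequence and fuse them with a convolutional LSTM cell run in both orders, as a stand-in for the paper's bidirectional ConvLSTM; the single-cell design and shapes are illustrative:

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.gates = nn.Conv2d(2 * ch, 4 * ch, 3, padding=1)  # i, f, o, g gates

    def forward(self, x, h, c):
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], 1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

def fuse_skip(enc_feat, dec_feat, cell):
    """Run the 2-step sequence [enc, dec] in both orders and sum the hiddens."""
    fused = 0
    for seq in ([enc_feat, dec_feat], [dec_feat, enc_feat]):
        h = torch.zeros_like(enc_feat)
        c = torch.zeros_like(enc_feat)
        for x in seq:
            h, c = cell(x, h, c)
        fused = fused + h
    return fused

enc = torch.randn(1, 32, 64, 64)   # encoder skip features
dec = torch.randn(1, 32, 64, 64)   # upsampled decoder features
print(fuse_skip(enc, dec, ConvLSTMCell(32)).shape)
```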
  Address Seoul; Korea; October 2019  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICCVW  
  Notes HUPBA; no proj Approved no  
  Call Number Admin @ si @ AAF2019 Serial 3324  
 

 
Author Maria Ines Torres; Javier Mikel Olaso; Cesar Montenegro; Riberto Santana; A.Vazquez; Raquel Justo; J.A.Lozano; Stephan Schogl; Gerard Chollet; Nazim Dugan; M.Irvine; N.Glackin; C.Pickard; Anna Esposito; Gennaro Cordasco; Alda Troncone; Dijana Petrovska Delacretaz; Aymen Mtibaa; Mohamed Amine Hmani; M.S.Korsnes; L.J.Martinussen; Sergio Escalera; C.Palmero Cantariño; Olivier Deroo; O.Gordeeva; Jofre Tenorio Laranga; E.Gonzalez Fraile; Begoña Fernandez Ruanova; A.Gonzalez Pinto
  Title The EMPATHIC project: mid-term achievements Type Conference Article
  Year 2019 Publication 12th ACM International Conference on PErvasive Technologies Related to Assistive Environments Abbreviated Journal  
  Volume Issue Pages 629-638  
  Keywords  
  Abstract  
  Address Rhodes; Greece; June 2019  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference PETRA  
  Notes HUPBA; no proj Approved no  
  Call Number Admin @ si @ TOM2019 Serial 3325  
 

 
Author Daniel Sanchez; Meysam Madadi; Marc Oliu; Sergio Escalera
  Title Multi-task human analysis in still images: 2D/3D pose, depth map, and multi-part segmentation Type Conference Article
  Year 2019 Publication 14th IEEE International Conference on Automatic Face and Gesture Recognition Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract While many individual tasks in the domain of human analysis have recently received an accuracy boost from deep learning approaches, multi-task learning has mostly been ignored due to a lack of data. New synthetic datasets are being released, filling this gap with synthetic generated data. In this work, we analyze four related human analysis tasks in still images in a multi-task scenario by leveraging such datasets. Specifically, we study the correlation of 2D/3D pose estimation, body part segmentation and full-body depth estimation. These tasks are learned via the well-known Stacked Hourglass module such that each of the task-specific streams shares information with the others. The main goal is to analyze how training together these four related tasks can benefit each individual task for a better generalization. Results on the newly released SURREAL dataset show that all four tasks benefit from the multi-task approach, but with different combinations of tasks: while combining all four tasks improves 2D pose estimation the most, 2D pose improves neither 3D pose nor full-body depth estimation. On the other hand 2D parts segmentation can benefit from 2D pose but not from 3D pose. In all cases, as expected, the maximum improvement is achieved on those human body parts that show more variability in terms of spatial distribution, appearance and shape, e.g. wrists and ankles.  
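
A small sketch of the multi-task arrangement: one shared backbone feeding task-specific heads whose losses are summed, so every task's gradient updates the shared features; the tiny backbone, head shapes, and random targets are illustrative, not the Stacked Hourglass streams used in the paper:

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
heads = nn.ModuleDict({
    "pose2d": nn.Conv2d(32, 16, 1),   # 16 joint heatmaps
    "pose3d": nn.Conv2d(32, 48, 1),   # 16 joints x 3 coordinates, as maps
    "parts":  nn.Conv2d(32, 15, 1),   # body-part segmentation logits
    "depth":  nn.Conv2d(32, 1, 1),    # full-body depth map
})

img = torch.randn(2, 3, 64, 64)
feats = backbone(img)                 # features shared by all four tasks
targets = {k: torch.randn_like(h(feats)) for k, h in heads.items()}

# Summing the per-task losses propagates every task's gradient into the
# shared backbone, which is what lets the tasks benefit each other.
loss = sum(nn.functional.mse_loss(h(feats), targets[k])
           for k, h in heads.items())
loss.backward()
print(float(loss))
```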
  Address Lille; France; May 2019  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference FG  
  Notes HUPBA; no proj Approved no  
  Call Number Admin @ si @ SMO2019 Serial 3326  
 

 
Author Sergio Escalera; Marti Soler; Stephane Ayache; Umut Guçlu; Jun Wan; Meysam Madadi; Xavier Baro; Hugo Jair Escalante; Isabelle Guyon
  Title ChaLearn Looking at People: Inpainting and Denoising Challenges Type Book Chapter
  Year 2019 Publication The Springer Series on Challenges in Machine Learning Abbreviated Journal  
  Volume Issue Pages 23-44  
  Keywords  
  Abstract Dealing with incomplete information is a well studied problem in the context of machine learning and computational intelligence. However, in the context of computer vision, the problem has only been studied in specific scenarios (e.g., certain types of occlusions in specific types of images), although it is common to have incomplete information in visual data. This chapter describes the design of an academic competition focusing on inpainting of images and video sequences that was part of the competition program of WCCI2018 and had a satellite event collocated with ECCV2018. The ChaLearn Looking at People Inpainting Challenge aimed at advancing the state of the art on visual inpainting by promoting the development of methods for recovering missing and occluded information from images and video. Three tracks were proposed in which visual inpainting might be helpful but still challenging: human body pose estimation, text overlays removal and fingerprint denoising. This chapter describes the design of the challenge, which includes the release of three novel datasets, and the description of evaluation metrics, baselines and evaluation protocol. The results of the challenge are analyzed and discussed in detail and conclusions derived from this event are outlined.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes HuPBA; no proj Approved no  
  Call Number Admin @ si @ ESA2019 Serial 3327  