Author Fahad Shahbaz Khan; Joost Van de Weijer; Muhammad Anwer Rao; Andrew Bagdanov; Michael Felsberg; Jorma Laaksonen
  Title Scale coding bag of deep features for human attribute and action recognition Type Journal Article
  Year 2018 Publication Machine Vision and Applications Abbreviated Journal MVAP  
  Volume 29 Issue 1 Pages 55-71  
  Keywords Action recognition; Attribute recognition; Bag of deep features  
  Abstract Most approaches to human attribute and action recognition in still images are based on image representations in which multi-scale local features are pooled across scale into a single, scale-invariant encoding. In both bag-of-words and the recently popular representations based on convolutional neural networks, local features are computed at multiple scales, but these multi-scale features are then pooled into a single scale-invariant representation. We argue that entirely scale-invariant image representations are sub-optimal and investigate approaches to scale coding within a bag of deep features framework. Our approach encodes multi-scale information explicitly during the image encoding stage, and we propose two strategies for doing so in the final image representation. We validate our two scale coding techniques on five datasets: Willow, PASCAL VOC 2010, PASCAL VOC 2012, Stanford-40 and Human Attributes (HAT-27). On all datasets, the proposed scale coding approaches outperform both the scale-invariant method and the standard deep features of the same network. Further, combining our scale coding approaches with standard deep features leads to consistent improvement over the state of the art.  
  Notes LAMP; 600.068; 600.079; 600.106; 600.120 Approved no  
  Call Number Admin @ si @ KWR2018 Serial 3107  
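A minimal sketch of the absolute scale-coding idea described in the abstract above: instead of pooling multi-scale local deep features into one scale-invariant encoding, features are pooled separately per scale range and the partial encodings are concatenated. The scale bins, the use of max-pooling, and all names are illustrative assumptions, not the authors' code.

    import numpy as np

    def scale_coded_encoding(descriptors, scales,
                             bins=((0, 64), (64, 128), (128, float("inf")))):
        # descriptors: (N, D) local deep features; scales: (N,) patch size at
        # which each feature was extracted; bins: small/medium/large ranges.
        parts = []
        for lo, hi in bins:
            mask = (scales >= lo) & (scales < hi)
            if mask.any():
                parts.append(descriptors[mask].max(axis=0))  # pool within range
            else:
                parts.append(np.zeros(descriptors.shape[1]))
        return np.concatenate(parts)  # 3*D dims; scale information kept explicit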
 

 
Author F. Javier Sanchez; Jorge Bernal; Cristina Sanchez Montes; Cristina Rodriguez de Miguel; Gloria Fernandez Esparrach
  Title Bright spot regions segmentation and classification for specular highlights detection in colonoscopy videos Type Journal Article
  Year 2017 Publication Machine Vision and Applications Abbreviated Journal MVAP  
  Volume Issue Pages 1-20  
  Keywords Specular highlights; bright spot regions segmentation; region classification; colonoscopy  
  Abstract A novel method for detecting specular highlights in colonoscopy videos is presented. The method is based on a model of appearance that defines specular highlights as bright spots highly contrasted with respect to adjacent regions. Our approach comprises two stages: segmentation, followed by classification of bright spot regions. The former defines a set of candidate regions obtained through a region growing process with local maxima as initial seeds. This process creates a tree structure that keeps track, at each growing iteration, of the contrast of the region frontier; the final regions depend on restrictions on the contrast value. Non-specular regions are filtered out in a classification stage performed by a linear SVM classifier using model-based features from each region. We introduce a new validation database with more than 25,000 regions along with their corresponding pixel-wise annotations, and we perform a comparative study against other approaches. Results show that our method is superior to the other approaches, with our segmented regions being closer to the actual specular regions in the image. Finally, we also show how our methodology can be used to obtain an accurate prediction of polyp histology.  
  Notes MV; 600.096; 600.175 Approved no  
  Call Number Admin @ si @ SBS2017 Serial 2975  
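An illustrative simplification of the segmentation stage described above: greedy region growing from a local maximum that records the region-to-frontier contrast at each iteration. The paper builds a tree over all candidate regions; this sketch, with assumed names, grows only a single region.

    import numpy as np
    from scipy import ndimage

    def grow_bright_region(img, seed, steps=200):
        region = np.zeros(img.shape, dtype=bool)
        region[seed] = True                          # local maximum as seed
        contrasts = []
        for _ in range(steps):
            frontier = ndimage.binary_dilation(region) & ~region
            if not frontier.any():
                break
            # contrast between the region and its current frontier
            contrasts.append(img[region].mean() - img[frontier].mean())
            idx = np.where(frontier.ravel())[0]      # absorb brightest frontier pixel
            region.ravel()[idx[img.ravel()[idx].argmax()]] = True
        return region, contrasts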
 

 
Author Mariella Dimiccoli; Jean-Pascal Jacob; Lionel Moisan
  Title Particle detection and tracking in fluorescence time-lapse imaging: a contrario approach Type Journal Article
  Year 2016 Publication Machine Vision and Applications Abbreviated Journal MVAP  
  Volume 27 Issue Pages 511-527  
  Keywords particle detection; particle tracking; a-contrario approach; time-lapse fluorescence imaging  
  Abstract In this work, we propose a probabilistic approach for the detection and tracking of particles in biological images. In the presence of very noisy, poor-quality data, particles and trajectories can be characterized by an a-contrario model that estimates the probability of observing the structures of interest in random data. This approach, first introduced in the modeling of human visual perception and then successfully applied to many image processing tasks, leads to algorithms that require neither a prior learning stage nor tedious parameter tuning, and that are very robust to noise. Comparative evaluations against a well-established baseline show that the proposed approach outperforms the state of the art.  
  Notes MILAB; Approved no  
  Call Number Admin @ si @ DJM2016 Serial 2735  
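A toy illustration of the a-contrario principle used above: a candidate structure is kept only when its number of false alarms (NFA), the expected count of equally strong structures in pure noise, falls below a threshold. The Gaussian noise model and all names here are assumptions made for the sketch.

    from scipy.stats import norm

    def nfa(observed_mean, n_pixels, n_tests, sigma=1.0):
        # The mean of n_pixels i.i.d. N(0, sigma) samples is N(0, sigma/sqrt(n)),
        # so p is the chance of a region this bright arising in noise; the NFA
        # multiplies it by the number of tested candidates.
        p = norm.sf(observed_mean, scale=sigma / n_pixels ** 0.5)
        return n_tests * p

    # a detection is epsilon-meaningful when nfa(...) < epsilon (typically 1)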
 

 
Author Razieh Rastgoo; Kourosh Kiani; Sergio Escalera
  Title Real-time Isolated Hand Sign Language Recognition Using Deep Networks and SVD Type Journal Article
  Year 2022 Publication Journal of Ambient Intelligence and Humanized Computing Abbreviated Journal  
  Volume 13 Issue Pages 591–611  
  Keywords  
  Abstract One of the challenges in computer vision models, especially for sign language, is real-time recognition. In this work, we present a simple yet efficient, low-complexity model, comprising a single shot detector, a 2D convolutional neural network, singular value decomposition (SVD), and long short-term memory, for real-time isolated hand sign language recognition (IHSLR) from RGB video. We employ the SVD as an efficient, compact, and discriminative feature extractor applied to the estimated 3D hand keypoint coordinates. Unlike previous works, which use the estimated 3D hand keypoint coordinates as raw features, we apply the SVD to these coordinates to obtain more discriminative features. The SVD is also applied to the geometric relations between consecutive segments of each finger and to the angles between these segments. We perform a detailed analysis of recognition time and accuracy. One of our contributions is that this is the first time the SVD has been applied to hand pose parameters. Results on four datasets, RKS-PERSIANSIGN (99.5±0.04), First-Person (91±0.06), ASVID (93±0.05), and isoGD (86.1±0.04), confirm the efficiency of our method in both accuracy (mean±std) and recognition time. Furthermore, our model outperforms, or achieves competitive results with, the state-of-the-art alternatives in IHSLR and hand action recognition.  
  Notes HUPBA; no proj Approved no  
  Call Number Admin @ si @ RKE2022a Serial 3660  
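A hedged sketch of the abstract's central idea, applying the SVD to estimated 3D hand keypoint coordinates to obtain a compact, discriminative descriptor. The 21-keypoint layout, the centering, and the use of singular values alone are assumptions; the paper also applies the SVD to inter-segment relations and angles.

    import numpy as np

    def svd_hand_features(keypoints_3d):
        # keypoints_3d: (21, 3) estimated 3D hand keypoint coordinates
        centered = keypoints_3d - keypoints_3d.mean(axis=0)
        # singular values summarize the spatial spread of the pose and are
        # unchanged by rotation of the keypoint cloud
        s = np.linalg.svd(centered, compute_uv=False)  # 3 values, sorted
        return s / s.sum()                             # scale-normalized feature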
 

 
Author Sergio Escalera; Jordi Gonzalez; Hugo Jair Escalante; Xavier Baro; Isabelle Guyon
  Title Looking at People Special Issue Type Journal Article
  Year 2018 Publication International Journal of Computer Vision Abbreviated Journal IJCV  
  Volume 126 Issue 2-4 Pages 141-143  
  Keywords  
  Abstract  
  Notes HUPBA; ISE; 600.119 Approved no  
  Call Number Admin @ si @ EGJ2018 Serial 3093  
 

 
Author Alejandro Cartas; Juan Marin; Petia Radeva; Mariella Dimiccoli
  Title Batch-based activity recognition from egocentric photo-streams revisited Type Journal Article
  Year 2018 Publication Pattern Analysis and Applications Abbreviated Journal PAA  
  Volume 21 Issue 4 Pages 953–965  
  Keywords Egocentric vision; Lifelogging; Activity recognition; Deep learning; Recurrent neural networks  
  Abstract Wearable cameras can gather large amounts of image data that provide rich visual information about the daily activities of the wearer. Motivated by the large number of health applications that could be enabled by the automatic recognition of daily activities, such as lifestyle characterization for habit improvement, context-aware personal assistance and tele-rehabilitation services, we propose a system to classify 21 daily activities from photo-streams acquired by a wearable photo-camera. Our approach combines the advantages of a late-fusion ensemble strategy relying on convolutional neural networks at the image level with the ability of recurrent neural networks to account for the temporal evolution of high-level features in photo-streams, without relying on event boundaries. The proposed batch-based approach achieved an overall accuracy of 89.85%, outperforming state-of-the-art end-to-end methodologies. These results were achieved on a dataset consisting of 44,902 egocentric pictures from three persons, captured over 26 days on average.  
  Notes MILAB; no proj Approved no  
  Call Number Admin @ si @ CMR2018 Serial 3186  
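A minimal PyTorch sketch of the batch-based design described above: per-image CNN features are fed to a recurrent layer that models their temporal evolution over a window of the photo-stream, with no event-boundary detection. Dimensions and the single LSTM layer are assumptions, not the paper's exact configuration.

    import torch
    import torch.nn as nn

    class PhotoStreamClassifier(nn.Module):
        def __init__(self, feat_dim=512, hidden=256, n_classes=21):
            super().__init__()
            self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, feats):            # feats: (batch, window, feat_dim)
            out, _ = self.lstm(feats)        # temporal evolution of CNN features
            return self.head(out[:, -1])     # activity logits for the window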
 

 
Author Domicele Jonauskaite; Nele Dael; C. Alejandro Parraga; Laetitia Chevre; Alejandro Garcia Sanchez; Christine Mohr
  Title Stripping #The Dress: The importance of contextual information on inter-individual differences in colour perception Type Journal Article
  Year 2018 Publication Psychological Research Abbreviated Journal PSYCHO R  
  Volume Issue Pages 1-15  
  Keywords  
  Abstract In 2015, a picture of a dress (henceforth the Dress) triggered popular and scientific interest; some reported seeing the Dress in white and gold (W&G) and others in blue and black (B&B). We aimed to describe the phenomenon and investigate the role of contextualization. A few days after the Dress had appeared on the Internet, we presented it to 240 students on two large screens in the classroom. Participants reported seeing the Dress in B&B (48%), W&G (38%), or blue and brown (B&Br; 7%). Amongst numerous socio-demographic variables, we observed only that W&G viewers were most likely to have always seen the Dress as W&G. In the laboratory, we tested how much contextual information is necessary for the phenomenon to occur. Fifty-seven participants selected the colours most precisely matching the predominant colours of parts of the Dress or of the full Dress. We presented, in this order, small squares (a), vertical strips (b), and the full Dress (c). We found that (1) B&B, B&Br, and W&G viewers selected colours differing in lightness and chroma levels for the contextualized images only (conditions b and c), and in hue for the fully contextualized condition only (c), and (2) B&B viewers selected colours most closely matching the displayed colours of the Dress. Thus, the Dress phenomenon emerges from inter-individual differences in subjectively perceived lightness, chroma, and hue, at least when all aspects of the picture need to be integrated. Our results support the previous conclusions that contextual information is key to colour perception; it remains important to understand how this actually happens.  
  Notes NEUROBIT; no proj Approved no  
  Call Number Admin @ si @ JDP2018 Serial 3149  
 

 
Author Alex Gomez-Villa; Adrian Martin; Javier Vazquez; Marcelo Bertalmio; Jesus Malo
  Title On the synthesis of visual illusions using deep generative models Type Journal Article
  Year 2022 Publication Journal of Vision Abbreviated Journal JOV  
  Volume 22(8) Issue 2 Pages 1-18  
  Keywords  
  Abstract Visual illusions expand our understanding of the visual system by imposing constraints on models in two different ways: (i) visual illusions for humans should induce equivalent illusions in the model, and (ii) illusions synthesized from the model should be compelling for human viewers too. These constraints are alternative strategies for finding good vision models. Following the first strategy, recent studies have shown that artificial neural network architectures also have human-like illusory percepts when stimulated with classical hand-crafted stimuli designed to fool humans. In this work we focus on the second (less explored) strategy: we propose a framework to synthesize new visual illusions using the optimization abilities of current automatic differentiation techniques. The proposed framework can be used with classical vision models as well as with more recent artificial neural network architectures. This framework, validated by psychophysical experiments, can be used to study the differences between a vision model and actual human perception and to optimize the vision model to decrease these differences.  
  Notes LAMP; 600.161; 611.007 Approved no  
  Call Number Admin @ si @ GMV2022 Serial 3682  
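A schematic of the synthesis loop the abstract describes, using automatic differentiation: the surround of a target region is optimized so that a differentiable vision model's response inside the region drifts away from the region's physical value, i.e. an induced illusion. The model, the loss, and the projection step are placeholder assumptions, not the paper's framework.

    import torch

    def synthesize_illusion(model, stimulus, target_mask, steps=200, lr=0.01):
        stim = stimulus.clone().requires_grad_(True)
        opt = torch.optim.Adam([stim], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            # maximize the gap between the model's response and the physical
            # values inside the target region
            loss = -((model(stim) - stim) * target_mask).abs().mean()
            loss.backward()
            opt.step()
            with torch.no_grad():            # only the surround may change
                stim[target_mask > 0] = stimulus[target_mask > 0]
        return stim.detach()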
 

 
Author Trevor Canham; Javier Vazquez; Elise Mathieu; Marcelo Bertalmío
  Title Matching visual induction effects on screens of different size Type Journal Article
  Year 2021 Publication Journal of Vision Abbreviated Journal JOV  
  Volume 21 Issue 6(10) Pages 1-22  
  Keywords  
  Abstract In the film industry, the same movie is expected to be watched on displays of vastly different sizes, from cinema screens to mobile phones. But visual induction, the perceptual phenomenon by which the appearance of a scene region is affected by its surroundings, will be different for the same image shown on two displays of different dimensions. This phenomenon presents a practical challenge for the preservation of the artistic intentions of filmmakers, because it can lead to shifts in image appearance between viewing destinations. In this work, we show that a neural field model based on the efficient representation principle is able to predict induction effects, and that, by regularizing its associated energy functional, the model is still able to represent induction but becomes invertible. From this finding, we propose a method to preprocess an image in a screen-size-dependent way so that its perception, in terms of visual induction, may remain constant across displays of different size. The potential of the method is demonstrated through psychophysical experiments on synthetic images and qualitative examples on natural images.  
  Notes CIC Approved no  
  Call Number Admin @ si @ CVM2021 Serial 3595  
 

 
Author David Berga; C. Wloka; J.K. Tsotsos
  Title Modeling task influences for saccade sequence and visual relevance prediction Type Journal Article
  Year 2019 Publication Journal of Vision Abbreviated Journal JOV  
  Volume 19 Issue 10 Pages 106c-106c  
  Keywords  
  Abstract Previous work by Wloka et al. (2017) presented the Selective Tuning Attentive Reference model Fixation Controller (STAR-FC), an active vision model for saccade prediction. Although the model is able to efficiently predict saccades during free viewing, it is well known that stimulus and task instructions can strongly affect eye movement patterns (Yarbus, 1967). These factors are considered in previous Selective Tuning architectures (Tsotsos and Kruijne, 2014) (Tsotsos, Kotseruba and Wloka, 2016) (Rosenfeld, Biparva and Tsotsos, 2017), which propose a way to combine bottom-up and top-down contributions to fixation and saccade programming. In particular, task priming has been shown to be crucial to the deployment of eye movements, involving interactions between brain areas related to goal-directed behavior and to working and long-term memory, in combination with the neuronal correlates of stimulus-driven eye movements. Initial theories and models of these influences include (Rao, Zelinsky, Hayhoe and Ballard, 2002) (Navalpakkam and Itti, 2005) (Huang and Pashler, 2007) and show distinct ways to process task requirements in combination with bottom-up attention. In this study we extend STAR-FC with novel computational definitions of a Long-Term Memory, a Visual Task Executive, and a Task Relevance Map. With these modules we are able to use textual instructions to guide the model to attend to specific categories of objects and/or places in the scene. We have designed our memory model by processing a hierarchy of visual features learned from salient object detection datasets. The relationship between the executive task instructions and the memory representations is specified using a tree of semantic similarities between the learned features and the object category labels. Results reveal that, with this model, the resulting relevance maps and predicted saccades have a higher probability of falling inside the salient regions, depending on the task instructions.  
  Notes NEUROBIT; 600.128; 600.120 Approved no  
  Call Number Admin @ si @ BWT2019 Serial 3308  
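A schematic of the task-modulation idea summarized above: a bottom-up saliency map is biased by a task relevance map assembled from semantic similarities between the instruction and object-category detections. All inputs are assumed precomputed and every name is illustrative; this is not the STAR-FC implementation.

    import numpy as np

    def task_modulated_priority(saliency, category_maps, similarity):
        # saliency:      (H, W) bottom-up conspicuity map
        # category_maps: dict name -> (H, W) soft detection map per category
        # similarity:    dict name -> semantic similarity of category to task
        relevance = np.zeros_like(saliency)
        for name, cmap in category_maps.items():
            relevance += similarity.get(name, 0.0) * cmap
        priority = saliency * (1.0 + relevance)   # task biases, not replaces, saliency
        return priority / priority.max()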
 

 
Author Hans Stadthagen-Gonzalez; M. Carmen Parafita; C. Alejandro Parraga; Markus F. Damian
  Title Testing alternative theoretical accounts of code-switching: Insights from comparative judgments of adjective noun order Type Journal Article
  Year 2019 Publication International Journal of Bilingualism: Interdisciplinary Studies of Multilingual Behaviour Abbreviated Journal IJB  
  Volume 23 Issue 1 Pages 200-220  
  Keywords  
  Abstract Objectives:
Spanish and English contrast in adjective–noun word order: for example, brown dress (English) vs. vestido marrón ('dress brown', Spanish). According to the Matrix Language Frame (MLF) model, word order in code-switched sentences must be compatible with the word order of the matrix language; but, working within the minimalist program (MP), Cantone and MacSwan arrived at the descriptive generalization that the position of the noun phrase relative to the adjective is determined by the adjective's language. Our aim is to evaluate the predictions derived from these two models regarding adjective–noun order in Spanish–English code-switched sentences.
Methodology:
We contrasted the predictions of both models regarding the acceptability of code-switched sentences with different adjective–noun orders that were compatible with the MP, the MLF, both, or neither. Acceptability was assessed in Experiment 1 with a 5-point Likert scale and in Experiment 2 with a 2-alternative forced choice (2AFC) task.  
  Notes NEUROBIT; no project mentioned Approved no  
  Call Number Admin @ si @ SPP2019 Serial 3242  
 

 
Author Debora Gil; Carles Sanchez; Agnes Borras; Marta Diez-Ferrer; Antoni Rosell
  Title Segmentation of Distal Airways using Structural Analysis Type Journal Article
  Year 2019 Publication PLoS ONE Abbreviated Journal PLoS  
  Volume 14 Issue 12 Pages  
  Keywords  
  Abstract Segmentation of airways in Computed Tomography (CT) scans is a must for accurate support of diagnosis and intervention in many pulmonary disorders. In particular, lung cancer diagnosis would benefit from segmentations reaching the most distal airways. We present a method that combines descriptors of local bronchial appearance with global graph-based structural analysis to fine-tune thresholds on the descriptors, adapted to each bronchial level. We have compared our method to the top performers of the EXACT09 challenge and to a commercial software for biopsy planning, evaluated on our own database of high-resolution CT scans acquired under different breathing conditions. Results on EXACT09 data show that our method provides a high reduction in leakage with minimal loss in airway detection. Results on our database show reliability across varying breathing conditions and competitive performance for biopsy planning compared to the commercial solution.  
  Notes IAM; 600.139; 600.145 Approved no  
  Call Number Admin @ si @ GSB2019 Serial 3357  
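An illustrative reduction of the paper's key mechanism: thresholds on local appearance descriptors are adapted per bronchial level, so distal (deeper) branches are not pruned by criteria tuned for proximal ones. The linear relaxation schedule and all parameter values are assumptions.

    def accept_branch(descriptor_score, level, base=0.9, relax=0.05, floor=0.5):
        # deeper bronchial levels tolerate weaker appearance evidence
        threshold = max(base - relax * level, floor)
        return descriptor_score >= threshold

    # e.g. accept_branch(0.62, level=7) -> True (threshold relaxed to 0.55)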
 

 
Author Vacit Oguz Yazici; Joost Van de Weijer; Longlong Yu
  Title Visual Transformers with Primal Object Queries for Multi-Label Image Classification Type Conference Article
  Year 2022 Publication 26th International Conference on Pattern Recognition Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Multi-label image classification is about predicting a set of class labels that can be considered as orderless sequential data. Transformers process sequential data as a whole; therefore, they are inherently good at set prediction. The first vision-based transformer model, which was proposed for the object detection task, introduced the concept of object queries. Object queries are learnable positional encodings used by attention modules in decoder layers to decode object classes or bounding boxes using the regions of interest in an image. However, inputting the same set of object queries to different decoder layers hinders training: it results in lower performance and delays convergence. In this paper, we propose the use of primal object queries, which are provided only at the start of the transformer decoder stack. In addition, we improve the mixup technique proposed for multi-label classification. The proposed transformer model with primal object queries improves the state-of-the-art class-wise F1 metric by 2.1% and 1.8%, and speeds up convergence by 79.0% and 38.6%, on the MS-COCO and NUS-WIDE datasets respectively.  
  Address Montreal; Quebec; Canada; August 2022  
  Area Expedition Conference ICPR  
  Notes LAMP; 600.147; 601.309 Approved no  
  Call Number Admin @ si @ YWY2022 Serial 3786  
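A minimal PyTorch sketch of the primal-object-queries idea: learnable queries enter only the first decoder layer, and each later layer consumes the previous layer's output rather than re-reading the same queries. Layer sizes and the use of the stock decoder layer are assumptions, not the paper's architecture.

    import torch
    import torch.nn as nn

    class PrimalQueryDecoder(nn.Module):
        def __init__(self, d_model=256, n_queries=80, n_layers=6, n_heads=8):
            super().__init__()
            self.queries = nn.Parameter(torch.randn(n_queries, d_model))
            self.layers = nn.ModuleList(
                nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
                for _ in range(n_layers))

        def forward(self, memory):               # memory: (B, HW, d_model)
            x = self.queries.unsqueeze(0).expand(memory.size(0), -1, -1)
            for layer in self.layers:            # queries enter the stack once
                x = layer(x, memory)
            return x                             # (B, n_queries, d_model) for label heads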
 

 
Author Ayan Banerjee; Palaiahnakote Shivakumara; Parikshit Acharya; Umapada Pal; Josep Llados
  Title TWD: A New Deep E2E Model for Text Watermark Detection in Video Images Type Conference Article
  Year 2022 Publication 26th International Conference on Pattern Recognition Abbreviated Journal  
  Volume Issue Pages  
  Keywords Deep learning; U-Net; FCENet; Scene text detection; Video text detection; Watermark text detection  
  Abstract Text watermark detection in video images is challenging because the characteristics of watermark text differ from those of caption and scene text in video images. Developing a single successful model for detecting watermark, caption, and scene text is an open challenge. This study develops a new deep end-to-end model for Text Watermark Detection (TWD) that covers watermark, caption, and scene text in video images. To standardize non-uniform contrast, quality, and resolution, we explore the U-Net3+ model for enhancing poor-quality text without affecting high-quality text. Similarly, to address the challenges of arbitrary orientation, text shape, and complex background, we explore a Stacked Hourglass Encoded Fourier Contour Embedding Network (SFCENet), fed with the output of the U-Net3+ model. The proposed work thus integrates enhancement and detection models into an end-to-end model for detecting multi-type text in video images. To validate the proposed model, we create our own dataset (named TW-866), which provides video images containing watermark text, captions (subtitles), and scene text. The proposed model is also evaluated on standard natural scene text detection datasets, namely ICDAR 2019 MLT, CTW1500, Total-Text, and DAST1500. The results show that the proposed method outperforms existing methods. To the best of our knowledge, this is the first work on text watermark detection in video images.  
  Address Montreal; Quebec; Canada; August 2022  
  Area Expedition Conference ICPR  
  Notes DAG; Approved no  
  Call Number Admin @ si @ BSA2022 Serial 3788  
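A hedged sketch of the end-to-end composition the abstract describes: an enhancement network standardizes contrast and resolution, and its output feeds a contour-based text detector. Both stages are placeholder modules here, not the U-Net3+ or SFCENet implementations.

    import torch.nn as nn

    class TWDPipeline(nn.Module):
        def __init__(self, enhancer: nn.Module, detector: nn.Module):
            super().__init__()
            self.enhancer = enhancer    # e.g. a U-Net3+-style encoder-decoder
            self.detector = detector    # e.g. a Fourier contour embedding detector

        def forward(self, frames):
            enhanced = self.enhancer(frames)   # normalize poor-quality text regions
            return self.detector(enhanced)     # watermark/caption/scene detections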
 

 
Author Aitor Alvarez-Gila; Joost Van de Weijer; Yaxing Wang; Estibaliz Garrote
  Title MVMO: A Multi-Object Dataset for Wide Baseline Multi-View Semantic Segmentation Type Conference Article
  Year 2022 Publication 29th IEEE International Conference on Image Processing Abbreviated Journal  
  Volume Issue Pages  
  Keywords multi-view; cross-view; semantic segmentation; synthetic dataset  
  Abstract We present MVMO (Multi-View, Multi-Object dataset): a synthetic dataset of 116,000 scenes containing randomly placed objects of 10 distinct classes, captured from 25 camera locations in the upper hemisphere. MVMO comprises photorealistic, path-traced image renders, together with semantic segmentation ground truth for every view. Unlike existing multi-view datasets, MVMO features wide baselines between cameras and a high density of objects, which lead to large disparities, heavy occlusions, and view-dependent object appearance. Single-view semantic segmentation is hindered by self- and inter-object occlusions, which additional viewpoints can help resolve. We therefore expect MVMO to propel research in multi-view semantic segmentation and cross-view semantic transfer. We also provide baselines showing that new research is needed in these fields to exploit the complementary information of multi-view setups.  
  Address Bordeaux; France; October 2022  
  Area Expedition Conference ICIP  
  Notes LAMP Approved no  
  Call Number Admin @ si @ AWW2022 Serial 3781  