Records | |||||
---|---|---|---|---|---|
Author | Zhong Jin; Franck Davoine; Zhen Lou | ||||
Title | An Effective EM Algorithm for PCA Mixture Model | Type | Miscellaneous | ||
Year | 2004 | Publication | Structural and Statistical Pattern Recognition, Lecture Notes in Computer Science, 3138:626–634 | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | |||||
Address | Lisbon, Portugal | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | Admin @ si @ JDL2004 | Serial | 482 | ||
Permanent link to this record | |||||
Author | Javier Rodenas; Bhalaji Nagarajan; Marc Bolaños; Petia Radeva | ||||
Title | Learning Multi-Subset of Classes for Fine-Grained Food Recognition | Type | Conference Article | ||
Year | 2022 | Publication | 7th International Workshop on Multimedia Assisted Dietary Management | Abbreviated Journal | |
Volume | Issue | Pages | 17–26 | ||
Keywords | |||||
Abstract | Food image recognition is a complex computer vision task, because of the large number of fine-grained food classes. Fine-grained recognition tasks focus on learning subtle discriminative details to distinguish similar classes. In this paper, we introduce a new method to improve the classification of classes that are more difficult to discriminate based on Multi-Subsets learning. Using a pre-trained network, we organize classes in multiple subsets using a clustering technique. Later, we embed these subsets in a multi-head model structure. This structure has three distinguishable parts. First, we use several shared blocks to learn the generalized representation of the data. Second, we use multiple specialized blocks focusing on specific subsets that are difficult to distinguish. Lastly, we use a fully connected layer to weight the different subsets in an end-to-end manner by combining the neuron outputs. We validated our proposed method using two recent state-of-the-art vision transformers on three public food recognition datasets. Our method was successful in learning the confused classes better and we outperformed the state-of-the-art on the three datasets. | ||||
Address | Lisboa; Portugal; October 2022 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | MADiMa | ||
Notes | MILAB | Approved | no | ||
Call Number | Admin @ si @ RNB2022 | Serial | 3797 | ||
Permanent link to this record | |||||
Author | Silvio Giancola; Anthony Cioppa; Adrien Deliege; Floriane Magera; Vladimir Somers; Le Kang; Xin Zhou; Olivier Barnich; Christophe De Vleeschouwer; Alexandre Alahi; Bernard Ghanem; Marc Van Droogenbroeck; Abdulrahman Darwish; Adrien Maglo; Albert Clapes; Andreas Luyts; Andrei Boiarov; Artur Xarles; Astrid Orcesi; Avijit Shah; Baoyu Fan; Bharath Comandur; Chen Chen; Chen Zhang; Chen Zhao; Chengzhi Lin; Cheuk-Yiu Chan; Chun Chuen Hui; Dengjie Li; Fan Yang; Fan Liang; Fang Da; Feng Yan; Fufu Yu; Guanshuo Wang; H. Anthony Chan; He Zhu; Hongwei Kan; Jiaming Chu; Jianming Hu; Jianyang Gu; Jin Chen; Joao V. B. Soares; Jonas Theiner; Jorge De Corte; Jose Henrique Brito; Jun Zhang; Junjie Li; Junwei Liang; Leqi Shen; Lin Ma; Lingchi Chen; Miguel Santos Marques; Mike Azatov; Nikita Kasatkin; Ning Wang; Qiong Jia; Quoc Cuong Pham; Ralph Ewerth; Ran Song; Rengang Li; Rikke Gade; Ruben Debien; Runze Zhang; Sangrok Lee; Sergio Escalera; Shan Jiang; Shigeyuki Odashima; Shimin Chen; Shoichi Masui; Shouhong Ding; Sin-wai Chan; Siyu Chen; Tallal El-Shabrawy; Tao He; Thomas B. Moeslund; Wan-Chi Siu; Wei Zhang; Wei Li; Xiangwei Wang; Xiao Tan; Xiaochuan Li; Xiaolin Wei; Xiaoqing Ye; Xing Liu; Xinying Wang; Yandong Guo; Yaqian Zhao; Yi Yu; Yingying Li; Yue He; Yujie Zhong; Zhenhua Guo; Zhiheng Li | ||||
Title | SoccerNet 2022 Challenges Results | Type | Conference Article | ||
Year | 2022 | Publication | 5th International ACM Workshop on Multimedia Content Analysis in Sports | Abbreviated Journal | |
Volume | Issue | Pages | 75-86 | ||
Keywords | |||||
Abstract | The SoccerNet 2022 challenges were the second annual video understanding challenges organized by the SoccerNet team. In 2022, the challenges were composed of 6 vision-based tasks: (1) action spotting, focusing on retrieving action timestamps in long untrimmed videos, (2) replay grounding, focusing on retrieving the live moment of an action shown in a replay, (3) pitch localization, focusing on detecting line and goal part elements, (4) camera calibration, dedicated to retrieving the intrinsic and extrinsic camera parameters, (5) player re-identification, focusing on retrieving the same players across multiple views, and (6) multiple object tracking, focusing on tracking players and the ball through unedited video streams. Compared to last year's challenges, tasks (1-2) had their evaluation metrics redefined to consider tighter temporal accuracies, and tasks (3-6) were novel, including their underlying data and annotations. More information on the tasks, challenges and leaderboards is available on this https URL. Baselines and development kits are available on this https URL. | ||||
Address | Lisboa; Portugal; October 2022 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ACMW | ||
Notes | HUPBA; not mentioned | Approved | no | ||
Call Number | Admin @ si @ GCD2022 | Serial | 3801 | ||
Permanent link to this record | |||||
Author | Carlos David Martinez Hinarejos; Josep Llados; Alicia Fornes; Francisco Casacuberta; Lluis de Las Heras; Joan Mas; Moises Pastor; Oriol Ramos Terrades; Joan Andreu Sanchez; Enrique Vidal; Fernando Vilariño | ||||
Title | Context, multimodality, and user collaboration in handwritten text processing: the CoMUN-HaT project | Type | Conference Article | ||
Year | 2016 | Publication | 3rd IberSPEECH | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Processing of handwritten documents is a task that is of wide interest for many purposes, such as those related to preserving cultural heritage. Handwritten text recognition techniques have been successfully applied during the last decade to obtain transcriptions of handwritten documents, and keyword spotting techniques have been applied for searching specific terms in image collections of handwritten documents. However, results on transcription and indexing are far from perfect. In this framework, the use of new data sources arises as a new paradigm that will allow for a better transcription and indexing of handwritten documents. Three main different data sources could be considered: context of the document (style, writer, historical time, topics, ...), multimodal data (representations of the document in a different modality, such as the speech signal of the dictation of the text), and user feedback (corrections, amendments, ...). The CoMUN-HaT project aims at the integration of these different data sources into the transcription and indexing task for handwritten documents: the use of context derived from the analysis of the documents, how multimodality can aid the recognition process to obtain more accurate transcriptions (including transcription in a modern version of the language), and integration into a user-in-the-loop assisted text transcription framework. This will be reflected in the construction of a transcription and indexing platform that can be used by both professional and non-professional users, contributing to crowd-sourcing activities to preserve cultural heritage and to obtain an accessible version of the involved corpus. | ||||
Address | Lisboa; Portugal; November 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | IberSPEECH | ||
Notes | DAG; MV; 600.097;SIAI | Approved | no | ||
Call Number | Admin @ si @ MLF2016 | Serial | 2813 | ||
Permanent link to this record | |||||
Author | Miguel Oliveira; Victor Santos; Angel Sappa; P. Dias | ||||
Title | Scene Representations for Autonomous Driving: an approach based on polygonal primitives | Type | Conference Article | ||
Year | 2015 | Publication | 2nd Iberian Robotics Conference ROBOT2015 | Abbreviated Journal | |
Volume | 417 | Issue | Pages | 503-515 | |
Keywords | Scene reconstruction; Point cloud; Autonomous vehicles | ||||
Abstract | In this paper, we present a novel methodology to compute a 3D scene representation. The algorithm uses macro scale polygonal primitives to model the scene. This means that the representation of the scene is given as a list of large scale polygons that describe the geometric structure of the environment. Results show that the approach is capable of producing accurate descriptions of the scene. In addition, the algorithm is very efficient when compared to other techniques. | ||||
Address | Lisboa; Portugal; November 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ROBOT | ||
Notes | ADAS; 600.076; 600.086 | Approved | no | ||
Call Number | Admin @ si @ OSS2015a | Serial | 2662 | ||
Permanent link to this record | |||||
Author | J.Poujol; Cristhian A. Aguilera-Carrasco; E.Danos; Boris X. Vintimilla; Ricardo Toledo; Angel Sappa | ||||
Title | Visible-Thermal Fusion based Monocular Visual Odometry | Type | Conference Article | ||
Year | 2015 | Publication | 2nd Iberian Robotics Conference ROBOT2015 | Abbreviated Journal | |
Volume | 417 | Issue | Pages | 517-528 | |
Keywords | Monocular Visual Odometry; LWIR-RGB cross-spectral Imaging; Image Fusion | ||||
Abstract | The manuscript evaluates the performance of a monocular visual odometry approach when images from different spectra are considered, both independently and fused. The objective behind this evaluation is to analyze if classical approaches can be improved when the given images, which are from different spectra, are fused and represented in new domains. The images in these new domains should have some of the following properties: i) more robust to noisy data; ii) less sensitive to changes (e.g., lighting); iii) richer in descriptive information, among others. In particular, in the current work two different image fusion strategies are considered. Firstly, images from the visible and thermal spectrum are fused using a Discrete Wavelet Transform (DWT) approach. Secondly, a monochrome threshold strategy is considered. The obtained representations are evaluated under a visual odometry framework, highlighting their advantages and disadvantages, using different urban and semi-urban scenarios. Comparisons with both monocular-visible spectrum and monocular-infrared spectrum are also provided, showing the validity of the proposed approach. | ||||
Address | Lisboa; Portugal; November 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Springer International Publishing | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 2194-5357 | ISBN | 978-3-319-27145-3 | Medium | |
Area | Expedition | Conference | ROBOT | ||
Notes | ADAS; 600.076; 600.086 | Approved | no | ||
Call Number | Admin @ si @ PAD2015 | Serial | 2663 | ||
Permanent link to this record | |||||
Author | Angel Sappa; Patricia Suarez; Henry Velesaca; Dario Carpio | ||||
Title | Domain Adaptation in Image Dehazing: Exploring the Usage of Images from Virtual Scenarios | Type | Conference Article | ||
Year | 2022 | Publication | 16th International Conference on Computer Graphics, Visualization, Computer Vision and Image Processing | Abbreviated Journal | |
Volume | Issue | Pages | 85-92 | ||
Keywords | Domain adaptation; Synthetic hazed dataset; Dehazing | ||||
Abstract | This work presents a novel domain adaptation strategy for deep learning-based approaches to solve the image dehazing problem. Firstly, a large set of synthetic images is generated by using a realistic 3D graphic simulator; these synthetic images contain different densities of haze, which are used for training the model that is later adapted to any real scenario. The adaptation process requires just a few images to fine-tune the model parameters. The proposed strategy allows overcoming the limitation of training a given model with few images. In other words, the proposed strategy implements the adaptation of a haze removal model trained with synthetic images to real scenarios. It should be noticed that it is quite difficult, if not impossible, to have large sets of pairs of real-world images (with and without haze) to train dehazing algorithms in a supervised way. Experimental results are provided showing the validity of the proposed domain adaptation strategy. | ||||
Address | Lisboa; Portugal; July 2022 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CGVCVIP | ||
Notes | MSIAU; no proj | Approved | no | ||
Call Number | Admin @ si @ SSV2022 | Serial | 3804 | ||
Permanent link to this record | |||||
Author | Jose A. Garcia; David Masip; Valerio Sbragaglia; Jacopo Aguzzi | ||||
Title | Using ORB, BoW and SVM to identificate and track tagged Norway lobster Nephrops Norvegicus (L.) | Type | Conference Article | ||
Year | 2016 | Publication | 3rd International Conference on Maritime Technology and Engineering | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Sustainable capture policies of many species strongly depend on the understanding of their social behaviour. Nevertheless, the analysis of emergent behaviour in marine species poses several challenges. Usually animals are captured and observed in tanks, and their behaviour is inferred from their dynamics and interactions. Therefore, researchers must deal with thousands of hours of video data. Without loss of generality, this paper proposes a computer vision approach to identify and track specific species, the Norway lobster, Nephrops norvegicus. We propose an identification scheme where animals are marked using black and white tags with a geometric shape in the center (holed triangle, filled triangle, holed circle and filled circle). Using a massive labelled dataset, we extract local features based on the ORB descriptor. These features are a posteriori clustered, and we construct a Bag of Visual Words feature vector per animal. This approximation gives us invariance to rotation and translation. An SVM classifier achieves generalization results above 99%. In a second contribution, we will make the code and training data publicly available. | ||||
Address | Lisboa; Portugal; July 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | MARTECH | ||
Notes | OR;MV; | Approved | no | ||
Call Number | Admin @ si @ GMS2016b | Serial | 2817 | ||
Permanent link to this record | |||||
Author | P. Ricaurte; C. Chilan; Cristhian A. Aguilera-Carrasco; Boris X. Vintimilla; Angel Sappa | ||||
Title | Performance Evaluation of Feature Point Descriptors in the Infrared Domain | Type | Conference Article | ||
Year | 2014 | Publication | 9th International Conference on Computer Vision Theory and Applications | Abbreviated Journal | |
Volume | 1 | Issue | Pages | 545-550 | |
Keywords | Infrared Imaging; Feature Point Descriptors | ||||
Abstract | This paper presents a comparative evaluation of classical feature point descriptors when they are used in the long-wave infrared spectral band. Robustness to changes in rotation, scaling, blur, and additive noise is evaluated using a state-of-the-art framework. Statistical results using an outdoor image data set are presented together with a discussion about the differences with respect to the results obtained when images from the visible spectrum are considered. | ||||
Address | Lisboa; Portugal; January 2014 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | VISAPP | ||
Notes | ADAS; 600.055; 600.076 | Approved | no | ||
Call Number | Admin @ si @ RCA2014b | Serial | 2476 | ||
Permanent link to this record | |||||
Author | Naveen Onkarappa; Cristhian A. Aguilera-Carrasco; Boris X. Vintimilla; Angel Sappa | ||||
Title | Cross-spectral Stereo Correspondence using Dense Flow Fields | Type | Conference Article | ||
Year | 2014 | Publication | 9th International Conference on Computer Vision Theory and Applications | Abbreviated Journal | |
Volume | 3 | Issue | Pages | 613-617 | |
Keywords | Cross-spectral Stereo Correspondence; Dense Optical Flow; Infrared and Visible Spectrum | ||||
Abstract | This manuscript addresses the cross-spectral stereo correspondence problem. It proposes the usage of a dense flow field based representation instead of the original cross-spectral images, which have a low correlation. In this way, working in the flow field space, classical cost functions can be used as similarity measures. Preliminary experimental results on urban environments have been obtained showing the validity of the proposed approach. | ||||
Address | Lisboa; Portugal; January 2014 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | VISAPP | ||
Notes | ADAS; 600.055; 600.076 | Approved | no | ||
Call Number | Admin @ si @ OAV2014 | Serial | 2477 | ||
Permanent link to this record | |||||
Author | Ariel Amato; Felipe Lumbreras; Angel Sappa | ||||
Title | A General-purpose Crowdsourcing Platform for Mobile Devices | Type | Conference Article | ||
Year | 2014 | Publication | 9th International Conference on Computer Vision Theory and Applications | Abbreviated Journal | |
Volume | 3 | Issue | Pages | 211-215 | |
Keywords | Crowdsourcing Platform; Mobile Crowdsourcing | ||||
Abstract | This paper presents details of a general purpose micro-task on-demand platform based on the crowdsourcing philosophy. This platform was specifically developed for mobile devices in order to exploit the strengths of such devices; namely: i) massivity, ii) ubiquity and iii) embedded sensors. The combined use of mobile platforms and the crowdsourcing model makes it possible to tackle from the simplest to the most complex tasks. User experience is the highlighted feature of this platform (this fact is extended to both task-proposer and task-solver). Proper tools, according to the specific task, are provided to a task-solver in order to perform his/her job in a simpler, faster and more appealing way. Moreover, a task can be easily submitted by just selecting predefined templates, which cover a wide range of possible applications. Examples of its usage in computer vision and computer games are provided, illustrating the potentiality of the platform. | ||||
Address | Lisboa; Portugal; January 2014 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | VISAPP | ||
Notes | ISE; ADAS; 600.054; 600.055; 600.076; 600.078 | Approved | no | ||
Call Number | Admin @ si @ ALS2014 | Serial | 2478 | ||
Permanent link to this record | |||||
Author | Patricia Suarez; Angel Sappa | ||||
Title | Toward a Thermal Image-Like Representation | Type | Conference Article | ||
Year | 2023 | Publication | Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications | Abbreviated Journal | |
Volume | Issue | Pages | 133-140 | ||
Keywords | |||||
Abstract | This paper proposes a novel model to obtain thermal image-like representations to be used as an input in any thermal image compressive sensing approach (e.g., thermal image filtering, enhancing, super-resolution). Thermal images offer interesting information about the objects in the scene, in addition to their temperature. Unfortunately, in most cases thermal cameras acquire low resolution/quality images. Hence, in order to improve these images, there are several state-of-the-art approaches that exploit complementary information from a low-cost channel (visible image) to increase the image quality of an expensive channel (infrared image). In these SOTA approaches visible images are fused at different levels without taking into account that the images are acquired at different bands of the spectrum. In this paper a novel approach is proposed to generate thermal image-like representations from low-cost visible images, by means of a contrastive cycled GAN network. The obtained representations (synthetic thermal images) can later be used to improve the low quality thermal image of the same scene. Experimental results on different datasets are presented. | ||||
Address | Lisboa; Portugal; February 2023 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | VISIGRAPP | ||
Notes | MSIAU | Approved | no | ||
Call Number | Admin @ si @ SuS2023b | Serial | 3927 | ||
Permanent link to this record | |||||
Author | David Dueñas; Mostafa Kamal; Petia Radeva | ||||
Title | Efficient Deep Learning Ensemble for Skin Lesion Classification | Type | Conference Article | ||
Year | 2023 | Publication | Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications | Abbreviated Journal | |
Volume | Issue | Pages | 303-314 | ||
Keywords | |||||
Abstract | Vision Transformers (ViTs) are deep learning techniques that have been gaining in popularity in recent years. In this work, we study the performance of ViTs and Convolutional Neural Networks (CNNs) on skin lesion classification tasks, specifically melanoma diagnosis. We show that regardless of the performance of both architectures, an ensemble of them can improve their generalization. We also present an adaptation to the Gram-OOD* method (detecting Out-of-distribution (OOD) using Gram matrices) for skin lesion images. Moreover, the integration of super-convergence was critical to success in building models with strict computing and training time constraints. We evaluated our ensemble of ViTs and CNNs, demonstrating that generalization is enhanced by placing first in the 2019 and third in the 2020 ISIC Challenge Live Leaderboards (available at https://challenge.isic-archive.com/leaderboards/live/). | ||||
Address | Lisboa; Portugal; February 2023 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | VISIGRAPP | ||
Notes | MILAB | Approved | no | ||
Call Number | Admin @ si @ DKR2023 | Serial | 3928 | ||
Permanent link to this record | |||||
Author | Patricia Marquez; Debora Gil; R.Mester; Aura Hernandez-Sabate | ||||
Title | Local Analysis of Confidence Measures for Optical Flow Quality Evaluation | Type | Conference Article | ||
Year | 2014 | Publication | 9th International Conference on Computer Vision Theory and Applications | Abbreviated Journal | |
Volume | 3 | Issue | Pages | 450-457 | |
Keywords | Optical Flow; Confidence Measure; Performance Evaluation | ||||
Abstract | Optical Flow (OF) techniques facing the complexity of real sequences have been developed in the last years. Even using the most appropriate technique for our specific problem, at some points the output flow might fail to achieve the minimum error required for the system. Confidence measures computed from either input data or OF output should discard those points where OF is not accurate enough for its further use. It follows that evaluating the capabilities of a confidence measure for bounding OF error is as important as the definition itself. In this paper we analyze different confidence measures and point out their advantages and limitations for their use in real world settings. We also explore their agreement with current tools for the evaluation of confidence measure performance. | ||||
Address | Lisboa; January 2014 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | VISAPP | ||
Notes | IAM; ADAS; 600.044; 600.060; 600.057; 601.145; 600.076; 600.075 | Approved | no | ||
Call Number | Admin @ si @ MGM2014 | Serial | 2432 | ||
Permanent link to this record | |||||
Author | Q. Xue; Laura Igual; A. Berenguel; M. Guerrieri; L. Garrido | ||||
Title | Active Contour Segmentation with Affine Coordinate-Based Parametrization | Type | Conference Article | ||
Year | 2014 | Publication | 9th International Conference on Computer Vision Theory and Applications | Abbreviated Journal | |
Volume | 1 | Issue | Pages | 5-14 | |
Keywords | Active Contours; Affine Coordinates; Mean Value Coordinates | ||||
Abstract | In this paper, we present a new framework for image segmentation based on parametrized active contours. The contour and the points of the image space are parametrized using a set of reduced control points that have to form a closed polygon in two dimensional problems and a closed surface in three dimensional problems. By moving the control points, the active contour evolves. We use mean value coordinates as the parametrization tool for the interface, which allows us to parametrize any point of the space, inside or outside the closed polygon or surface. Region-based energies such as the one proposed by Chan and Vese can be easily implemented in both two and three dimensional segmentation problems. We show the usefulness of our approach with several experiments. | ||||
Address | Lisboa; January 2014 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | VISAPP | ||
Notes | OR;MILAB | Approved | no | ||
Call Number | Admin @ si @ XIB2014 | Serial | 2452 | ||
Permanent link to this record |