Records | |||||
---|---|---|---|---|---|
Author | Julio C. S. Jacques Junior; Cagri Ozcinar; Marina Marjanovic; Xavier Baro; Gholamreza Anbarjafari; Sergio Escalera | ||||
Title | On the effect of age perception biases for real age regression | Type | Conference Article | ||
Year | 2019 | Publication | 14th IEEE International Conference on Automatic Face and Gesture Recognition | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Automatic age estimation from facial images represents an important task in computer vision. This paper analyses the effect of gender, age, ethnicity, makeup and expression attributes of faces as sources of bias to improve deep apparent age prediction. Following recent works showing that apparent age labels benefit real age estimation more than direct real-to-real age regression, our main contribution is the integration, in an end-to-end architecture, of face attributes for apparent age prediction with an additional loss for real age regression. Experimental results on the APPA-REAL dataset indicate that the proposed network successfully takes advantage of the adopted attributes to improve both apparent and real age estimation. Our model outperformed a state-of-the-art architecture proposed to separately address apparent and real age regression. Finally, we present preliminary results and a discussion of a proof-of-concept application using the proposed model to regress the apparent age of an individual based on the gender of an external observer. | ||||
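The abstract above describes a joint objective: an apparent-age prediction loss plus an additional real-age regression loss. A minimal numpy sketch of such a combined objective follows; the function name, the specific loss forms (MSE and L1) and the weighting factor `lam` are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def combined_age_loss(pred_apparent, apparent_labels, pred_real, real_labels, lam=0.5):
    """Joint objective: apparent-age loss plus a weighted real-age regression term."""
    loss_apparent = np.mean((pred_apparent - apparent_labels) ** 2)  # MSE on apparent age
    loss_real = np.mean(np.abs(pred_real - real_labels))             # L1 on real age
    return loss_apparent + lam * loss_real

# Toy batch of 4 predictions
pred_a = np.array([25.0, 31.0, 40.0, 18.0])
lab_a  = np.array([24.0, 30.0, 42.0, 20.0])
pred_r = np.array([26.0, 33.0, 39.0, 17.0])
lab_r  = np.array([25.0, 32.0, 41.0, 19.0])
print(combined_age_loss(pred_a, lab_a, pred_r, lab_r))  # 3.25
```

In an end-to-end network both terms would be backpropagated together, so the shared features serve apparent and real age estimation at once.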
Address | Lille; France; May 2019 | |||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | FG | ||
Notes | HuPBA; no proj | Approved | no | ||
Call Number | Admin @ si @ JOM2019 | Serial | 3262 | ||
Author | Daniel Sanchez; Meysam Madadi; Marc Oliu; Sergio Escalera | ||||
Title | Multi-task human analysis in still images: 2D/3D pose, depth map, and multi-part segmentation | Type | Conference Article | ||
Year | 2019 | Publication | 14th IEEE International Conference on Automatic Face and Gesture Recognition | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | While many individual tasks in the domain of human analysis have recently received an accuracy boost from deep learning approaches, multi-task learning has mostly been ignored due to a lack of data. New synthetic datasets are being released, filling this gap with synthetically generated data. In this work, we analyze four related human analysis tasks in still images in a multi-task scenario by leveraging such datasets. Specifically, we study the correlation of 2D/3D pose estimation, body part segmentation and full-body depth estimation. These tasks are learned via the well-known Stacked Hourglass module such that each of the task-specific streams shares information with the others. The main goal is to analyze how training these four related tasks together can benefit each individual task for better generalization. Results on the newly released SURREAL dataset show that all four tasks benefit from the multi-task approach, but with different combinations of tasks: while combining all four tasks improves 2D pose estimation the most, 2D pose improves neither 3D pose nor full-body depth estimation. On the other hand, 2D part segmentation can benefit from 2D pose but not from 3D pose. In all cases, as expected, the maximum improvement is achieved on those human body parts that show more variability in terms of spatial distribution, appearance and shape, e.g. wrists and ankles. | ||||
Address | Lille; France; May 2019 | |||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | FG | ||
Notes | HUPBA; no proj | Approved | no | ||
Call Number | Admin @ si @ SMO2019 | Serial | 3326 | ||
Author | Juan A. Carvajal Ayala; Dennis Romero; Angel Sappa | ||||
Title | Fine-tuning based deep convolutional networks for lepidopterous genus recognition | Type | Conference Article | ||
Year | 2016 | Publication | 21st Ibero American Congress on Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 467-475 | ||
Keywords | |||||
Abstract | This paper describes an image classification approach oriented to identifying specimens of lepidopterous insects at Ecuadorian ecological reserves. This work seeks to contribute to biological studies of butterfly genera and also to facilitate the registration of unrecognized specimens. The proposed approach is based on fine-tuning three widely used pre-trained Convolutional Neural Networks (CNNs). This strategy is intended to overcome the reduced number of labeled images. Experimental results with a dataset labeled by expert biologists are presented, reaching a recognition accuracy above 92%. | ||||
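Fine-tuning as described in this abstract typically means taking a pre-trained backbone, freezing (at least initially) its weights, and training a new classification head for the target classes. A minimal PyTorch sketch of that pattern follows; the tiny stand-in backbone, the class count and the hyperparameters are illustrative assumptions, not the networks the paper actually fine-tuned.

```python
import torch
import torch.nn as nn

# Stand-in "pretrained" backbone; in practice this would be a large CNN
# loaded with ImageNet weights (the paper fine-tunes three such networks).
backbone = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
head = nn.Linear(8, 15)  # e.g. 15 butterfly genera (number is illustrative)

# Freeze the backbone so only the new head is updated at first.
for p in backbone.parameters():
    p.requires_grad = False

model = nn.Sequential(backbone, head)
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=1e-3)  # optimizer sees only the head

out = model(torch.randn(2, 3, 32, 32))
print(out.shape)  # torch.Size([2, 15])
```

Once the head converges, some or all backbone layers can be unfrozen with a small learning rate, which is what makes the approach viable with few labeled images.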
Address | Lima; Perú; November 2016 | |||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CIARP | ||
Notes | ADAS; 600.086 | Approved | no | ||
Call Number | Admin @ si @ CRS2016 | Serial | 2913 | ||
Author | Arnau Ramisa; David Aldavert; Shrihari Vasudevan; Ricardo Toledo; Ramon Lopez de Mantaras | ||||
Title | The IIIA30 Mobile Robot Object Recognition Dataset | Type | Conference Article | |
Year | 2011 | Publication | 11th Portuguese Robotics Open | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Object perception is a key feature in order to make mobile robots able to perform high-level tasks. However, research aimed at addressing the constraints and limitations encountered in a mobile robotics scenario, like low image resolution, motion blur or tight computational constraints, is still very scarce. In order to facilitate future research in this direction, in this work we present an object detection and recognition dataset acquired using a mobile robotic platform. As a baseline for the dataset, we evaluated the cascade of weak classifiers object detection method from Viola and Jones. | ||||
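The Viola–Jones baseline cited in this abstract builds its weak classifiers from Haar-like features evaluated on an integral image. A numpy sketch of that core operation follows (the cascade training itself is omitted; function names and the toy image are illustrative).

```python
import numpy as np

def integral_image(img):
    """Cumulative sums so any rectangle sum costs four lookups."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] from the integral image (exclusive ends)."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

def two_rect_feature(ii, r0, c0, h, w):
    """Horizontal two-rectangle Haar-like feature: left half minus right half."""
    left = rect_sum(ii, r0, c0, r0 + h, c0 + w // 2)
    right = rect_sum(ii, r0, c0 + w // 2, r0 + h, c0 + w)
    return left - right

img = np.arange(36, dtype=np.float64).reshape(6, 6)
ii = integral_image(img)
# Sanity check against a brute-force rectangle sum
assert rect_sum(ii, 1, 1, 4, 5) == img[1:4, 1:5].sum()
print(two_rect_feature(ii, 0, 0, 4, 4))  # -16.0
```

Constant-time rectangle sums are what make the cascade cheap enough to scan a whole image, which matters under the tight computational constraints the abstract mentions.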
Address | Lisboa | |||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | Robotica | ||
Notes | RV;ADAS | Approved | no | ||
Call Number | Admin @ si @ RAV2011 | Serial | 1777 | ||
Author | Agnes Borras; Josep Llados | ||||
Title | Corest: A measure of color and space stability to detect salient regions according to human criteria | Type | Conference Article | ||
Year | 2009 | Publication | 5th International Conference on Computer Vision Theory and Applications | Abbreviated Journal | |
Volume | Issue | Pages | 204-209 | ||
Keywords | |||||
Abstract | |||||
Address | Lisboa, Portugal | |||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-989-8111-69-2 | Medium | ||
Area | Expedition | Conference | VISAPP | ||
Notes | DAG | Approved | no | ||
Call Number | DAG @ dag @ BoL2009 | Serial | 1225 | ||
Author | Partha Pratim Roy; Josep Llados; Umapada Pal | ||||
Title | A Complete System for Detection and Recognition of Text in Graphical Documents using Background Information | Type | Conference Article | ||
Year | 2009 | Publication | 5th International Conference on Computer Vision Theory and Applications | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | |||||
Address | Lisboa, Portugal | |||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-989-8111-69-2 | Medium | ||
Area | Expedition | Conference | VISAPP | ||
Notes | DAG | Approved | no | ||
Call Number | DAG @ dag @ RLP2009 | Serial | 1238 | ||
Author | Danna Xue; Fei Yang; Pei Wang; Luis Herranz; Jinqiu Sun; Yu Zhu; Yanning Zhang | ||||
Title | SlimSeg: Slimmable Semantic Segmentation with Boundary Supervision | Type | Conference Article | ||
Year | 2022 | Publication | 30th ACM International Conference on Multimedia | Abbreviated Journal | |
Volume | Issue | Pages | 6539-6548 | ||
Keywords | |||||
Abstract | Accurate semantic segmentation models typically require significant computational resources, inhibiting their use in practical applications. Recent works rely on well-crafted lightweight models to achieve fast inference. However, these models cannot flexibly adapt to varying accuracy and efficiency requirements. In this paper, we propose a simple but effective slimmable semantic segmentation (SlimSeg) method, which can be executed at different capacities during inference depending on the desired accuracy-efficiency tradeoff. More specifically, we employ parametrized channel slimming by stepwise downward knowledge distillation during training. Motivated by the observation that the differences between segmentation results of each submodel are mainly near the semantic borders, we introduce an additional boundary guided semantic segmentation loss to further improve the performance of each submodel. We show that our proposed SlimSeg with various mainstream networks can produce flexible models that provide dynamic adjustment of computational cost and better performance than independent models. Extensive experiments on semantic segmentation benchmarks, Cityscapes and CamVid, demonstrate the generalization ability of our framework. | ||||
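Channel slimming as described in this abstract means executing the same network at several widths, with the narrow submodels reusing the leading channels of the full model. A toy numpy illustration of that weight-slicing idea follows; the 1x1-convolution layer and all names are illustrative assumptions (the actual method trains all widths jointly with stepwise distillation and a boundary loss).

```python
import numpy as np

def conv1x1(x, weight):
    """1x1 convolution: x is (C_in, H, W), weight is (C_out, C_in)."""
    c, h, w = x.shape
    return (weight @ x.reshape(c, h * w)).reshape(-1, h, w)

def slim_forward(x, weight, width_frac):
    """Run the layer with only the first fraction of its output channels."""
    c_out = max(1, int(weight.shape[0] * width_frac))
    return conv1x1(x, weight[:c_out])

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 8, 8))
w = rng.standard_normal((32, 16))

full = slim_forward(x, w, 1.0)   # (32, 8, 8)
half = slim_forward(x, w, 0.5)   # (16, 8, 8) — cheaper submodel
# The slim submodel's channels coincide with the first channels of the full model
assert np.allclose(half, full[:16])
print(full.shape, half.shape)
```

Because every width shares one set of weights, a single trained model can be deployed at whatever accuracy–efficiency operating point the device allows.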
Address | Lisboa, Portugal, October 2022 | |||
Corporate Author | Thesis | ||||
Publisher | Association for Computing Machinery | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-1-4503-9203-7 | Medium | ||
Area | Expedition | Conference | MM | ||
Notes | MACO; 600.161; 601.400 | Approved | no | ||
Call Number | Admin @ si @ XYW2022 | Serial | 3758 | ||
Author | Patricia Marquez; Debora Gil; R.Mester; Aura Hernandez-Sabate | ||||
Title | Local Analysis of Confidence Measures for Optical Flow Quality Evaluation | Type | Conference Article | ||
Year | 2014 | Publication | 9th International Conference on Computer Vision Theory and Applications | Abbreviated Journal | |
Volume | 3 | Issue | Pages | 450-457 | |
Keywords | Optical Flow; Confidence Measure; Performance Evaluation. | ||||
Abstract | Optical Flow (OF) techniques able to face the complexity of real sequences have been developed in recent years. Even when using the most appropriate technique for a specific problem, at some points the output flow might fail to achieve the minimum error required by the system. Confidence measures computed from either input data or OF output should discard those points where OF is not accurate enough for further use. It follows that evaluating the capability of a confidence measure to bound OF error is as important as its definition itself. In this paper we analyze different confidence measures and point out their advantages and limitations for use in real-world settings. We also explore their agreement with current tools for evaluating confidence measure performance. | ||||
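A standard tool for evaluating a confidence measure's ability to bound flow error is a sparsification curve: pixels are removed in order of increasing confidence and the mean error of the retained pixels is tracked. A minimal numpy sketch follows; the function name, step count and toy data are illustrative, not the paper's exact evaluation protocol.

```python
import numpy as np

def sparsification_curve(errors, confidence, steps=4):
    """Mean error of the retained pixels after removing the least-confident
    fraction; a useful confidence measure makes this curve decrease."""
    order = np.argsort(confidence)  # least confident first
    n = len(errors)
    curve = []
    for k in range(steps):
        removed = int(n * k / steps)
        curve.append(errors[order[removed:]].mean())
    return np.array(curve)

# Toy example: confidence perfectly anti-correlated with error
err = np.array([4.0, 3.0, 2.0, 1.0])
conf = np.array([0.1, 0.2, 0.3, 0.4])
curve = sparsification_curve(err, conf)
print(curve)  # decreasing: 2.5, 2.0, 1.5, 1.0
```

A flat or rising curve would indicate a measure that fails to flag the unreliable flow vectors.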
Address | Lisboa; January 2014 | |||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | VISAPP | ||
Notes | IAM; ADAS; 600.044; 600.060; 600.057; 601.145; 600.076; 600.075 | Approved | no | ||
Call Number | Admin @ si @ MGM2014 | Serial | 2432 | ||
Author | Q. Xue; Laura Igual; A. Berenguel; M. Guerrieri; L. Garrido | ||||
Title | Active Contour Segmentation with Affine Coordinate-Based Parametrization | Type | Conference Article | ||
Year | 2014 | Publication | 9th International Conference on Computer Vision Theory and Applications | Abbreviated Journal | |
Volume | 1 | Issue | Pages | 5-14 | |
Keywords | Active Contours; Affine Coordinates; Mean Value Coordinates | ||||
Abstract | In this paper, we present a new framework for image segmentation based on parametrized active contours. The contour and the points of the image space are parametrized using a reduced set of control points that form a closed polygon in two-dimensional problems and a closed surface in three-dimensional problems. By moving the control points, the active contour evolves. We use mean value coordinates as the parametrization tool for the interface, which allows any point of the space, inside or outside the closed polygon or surface, to be parametrized. Region-based energies such as the one proposed by Chan and Vese can be easily implemented in both two- and three-dimensional segmentation problems. We show the usefulness of our approach with several experiments. | ||||
Address | Lisboa; January 2014 | |||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | VISAPP | ||
Notes | OR;MILAB | Approved | no | ||
Call Number | Admin @ si @ XIB2014 | Serial | 2452 | ||
Author | Patricia Suarez; Angel Sappa | ||||
Title | Toward a Thermal Image-Like Representation | Type | Conference Article | ||
Year | 2023 | Publication | Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications | Abbreviated Journal | |
Volume | Issue | Pages | 133-140 | ||
Keywords | |||||
Abstract | This paper proposes a novel model to obtain thermal image-like representations to be used as input in any thermal image compressive sensing approach (e.g., thermal image filtering, enhancement, super-resolution). Thermal images offer interesting information about the objects in the scene in addition to their temperature. Unfortunately, in most cases thermal cameras acquire low resolution/quality images. Hence, in order to improve these images, several state-of-the-art approaches exploit complementary information from a low-cost channel (visible image) to increase the image quality of an expensive channel (infrared image). In these SOTA approaches, visible images are fused at different levels without taking into account that the images are acquired in different bands of the spectrum. In this paper a novel approach is proposed to generate thermal image-like representations from low-cost visible images by means of a contrastive cycled GAN network. The obtained representations (synthetic thermal images) can later be used to improve the low-quality thermal image of the same scene. Experimental results on different datasets are presented. | ||||
Address | Lisboa; Portugal; February 2023 | |||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | VISIGRAPP | ||
Notes | MSIAU | Approved | no | ||
Call Number | Admin @ si @ SuS2023b | Serial | 3927 | ||
Author | David Dueñas; Mostafa Kamal; Petia Radeva | ||||
Title | Efficient Deep Learning Ensemble for Skin Lesion Classification | Type | Conference Article | ||
Year | 2023 | Publication | Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications | Abbreviated Journal | |
Volume | Issue | Pages | 303-314 | ||
Keywords | |||||
Abstract | Vision Transformers (ViTs) are deep learning techniques that have gained popularity in recent years. In this work, we study the performance of ViTs and Convolutional Neural Networks (CNNs) on skin lesion classification tasks, specifically melanoma diagnosis. We show that, regardless of the individual performance of the two architectures, an ensemble of them can improve their generalization. We also present an adaptation of the Gram-OOD* method (detecting out-of-distribution (OOD) samples using Gram matrices) for skin lesion images. Moreover, the integration of super-convergence was critical to success in building models under strict computing and training time constraints. We evaluated our ensemble of ViTs and CNNs, demonstrating that generalization is enhanced, by placing first in the 2019 and third in the 2020 ISIC Challenge Live Leaderboards (available at https://challenge.isic-archive.com/leaderboards/live/). | ||||
Address | Lisboa; Portugal; February 2023 | |||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | VISIGRAPP | ||
Notes | MILAB | Approved | no | ||
Call Number | Admin @ si @ DKR2023 | Serial | 3928 | ||
Author | P. Ricaurte; C. Chilan; Cristhian A. Aguilera-Carrasco; Boris X. Vintimilla; Angel Sappa | ||||
Title | Performance Evaluation of Feature Point Descriptors in the Infrared Domain | Type | Conference Article | ||
Year | 2014 | Publication | 9th International Conference on Computer Vision Theory and Applications | Abbreviated Journal | |
Volume | 1 | Issue | Pages | 545-550 | |
Keywords | Infrared Imaging; Feature Point Descriptors | ||||
Abstract | This paper presents a comparative evaluation of classical feature point descriptors when they are used in the long-wave infrared spectral band. Robustness to changes in rotation, scaling, blur, and additive noise is evaluated using a state-of-the-art framework. Statistical results using an outdoor image data set are presented, together with a discussion of the differences with respect to the results obtained when images from the visible spectrum are considered. | ||||
Address | Lisboa; Portugal; January 2014 | |||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | VISAPP | ||
Notes | ADAS; 600.055; 600.076 | Approved | no | ||
Call Number | Admin @ si @ RCA2014b | Serial | 2476 | ||
Author | Naveen Onkarappa; Cristhian A. Aguilera-Carrasco; Boris X. Vintimilla; Angel Sappa | ||||
Title | Cross-spectral Stereo Correspondence using Dense Flow Fields | Type | Conference Article | ||
Year | 2014 | Publication | 9th International Conference on Computer Vision Theory and Applications | Abbreviated Journal | |
Volume | 3 | Issue | Pages | 613-617 | |
Keywords | Cross-spectral Stereo Correspondence; Dense Optical Flow; Infrared and Visible Spectrum | ||||
Abstract | This manuscript addresses the cross-spectral stereo correspondence problem. It proposes the usage of a dense flow field based representation instead of the original cross-spectral images, which have a low correlation. In this way, working in the flow field space, classical cost functions can be used as similarity measures. Preliminary experimental results on urban environments have been obtained showing the validity of the proposed approach. | ||||
Address | Lisboa; Portugal; January 2014 | |||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | VISAPP | ||
Notes | ADAS; 600.055; 600.076 | Approved | no | ||
Call Number | Admin @ si @ OAV2014 | Serial | 2477 | ||
Author | Ariel Amato; Felipe Lumbreras; Angel Sappa | ||||
Title | A General-purpose Crowdsourcing Platform for Mobile Devices | Type | Conference Article | ||
Year | 2014 | Publication | 9th International Conference on Computer Vision Theory and Applications | Abbreviated Journal | |
Volume | 3 | Issue | Pages | 211-215 | |
Keywords | Crowdsourcing Platform; Mobile Crowdsourcing | ||||
Abstract | This paper presents details of a general-purpose micro-task on-demand platform based on the crowdsourcing philosophy. This platform was specifically developed for mobile devices in order to exploit the strengths of such devices, namely: i) massivity, ii) ubiquity and iii) embedded sensors. The combined use of mobile platforms and the crowdsourcing model makes it possible to tackle tasks from the simplest to the most complex. User experience is the highlighted feature of this platform (a fact that extends to both task-proposer and task-solver). Tools appropriate to a specific task are provided to the task-solver in order to perform his/her job in a simpler, faster and more appealing way. Moreover, a task can be easily submitted by just selecting predefined templates, which cover a wide range of possible applications. Examples of its usage in computer vision and computer games are provided, illustrating the potential of the platform. | ||||
Address | Lisboa; Portugal; January 2014 | |||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | VISAPP | ||
Notes | ISE; ADAS; 600.054; 600.055; 600.076; 600.078 | Approved | no | ||
Call Number | Admin @ si @ ALS2014 | Serial | 2478 | ||
Author | Jose A. Garcia; David Masip; Valerio Sbragaglia; Jacopo Aguzzi | ||||
Title | Using ORB, BoW and SVM to identify and track tagged Norway lobster Nephrops norvegicus (L.) | Type | Conference Article | |
Year | 2016 | Publication | 3rd International Conference on Maritime Technology and Engineering | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Sustainable capture policies for many species strongly depend on the understanding of their social behaviour. Nevertheless, the analysis of emergent behaviour in marine species poses several challenges. Usually animals are captured and observed in tanks, and their behaviour is inferred from their dynamics and interactions. Therefore, researchers must deal with thousands of hours of video data. Without loss of generality, this paper proposes a computer vision approach to identify and track a specific species, the Norway lobster, Nephrops norvegicus. We propose an identification scheme where animals are marked using black and white tags with a geometric shape in the center (holed triangle, filled triangle, holed circle and filled circle). Using a massive labelled dataset, we extract local features based on the ORB descriptor. These features are a posteriori clustered, and we construct a Bag of Visual Words feature vector per animal. This approach yields invariance to rotation and translation. An SVM classifier achieves generalization results above 99%. As a second contribution, we will make the code and training data publicly available. | ||||
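The pipeline in this abstract clusters ORB descriptors into a visual vocabulary and represents each animal as a Bag of Visual Words histogram fed to an SVM. A numpy sketch of the encoding step follows; ORB extraction, clustering and the SVM are omitted, and the random codebook and descriptor dimensions are illustrative stand-ins.

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest visual word and
    return an L1-normalized histogram over the vocabulary."""
    # Pairwise squared distances: (n_desc, n_words)
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(1)
codebook = rng.standard_normal((5, 32))  # 5 visual words, 32-D toy descriptors
# Descriptors placed near words 0, 0, 2 and 4
descs = codebook[[0, 0, 2, 4]] + 0.01 * rng.standard_normal((4, 32))
h = bow_histogram(descs, codebook)
print(h)  # [0.5, 0.0, 0.25, 0.0, 0.25]
```

Because the histogram discards descriptor positions, the resulting feature vector is invariant to rotation and translation of the tag, which is what the abstract relies on.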
Address | Lisboa; Portugal; July 2016 | |||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | MARTECH | ||
Notes | OR;MV; | Approved | no | ||
Call Number | Admin @ si @ GMS2016b | Serial | 2817 | ||