Author |
Egils Avots; Meysam Madadi; Sergio Escalera; Jordi Gonzalez; Xavier Baro; Paul Pallin; Gholamreza Anbarjafari |
|
|
Title |
From 2D to 3D geodesic-based garment matching |
Type |
Journal Article |
|
Year |
2019 |
Publication |
Multimedia Tools and Applications |
Abbreviated Journal |
MTAP |
|
|
Volume |
78 |
Issue |
18 |
Pages |
25829–25853 |
|
|
Keywords |
Shape matching; Geodesic distance; Texture mapping; RGBD image processing; Gaussian mixture model |
|
|
Abstract |
A new approach for 2D to 3D garment retexturing is proposed based on Gaussian mixture models and thin plate splines (TPS). An automatically segmented garment of an individual is matched to a new source garment and rendered, resulting in augmented images in which the target garment has been retextured using the texture of the source garment. We divide the problem into garment boundary matching based on Gaussian mixture models and then interpolate inner points using surface topology extracted through geodesic paths, which leads to a more realistic result than standard approaches. We evaluated and compared our system quantitatively by root mean square error (RMS) and qualitatively using the mean opinion score (MOS), showing the benefits of the proposed methodology on our gathered dataset. |
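The matching-then-interpolation pipeline described in this abstract can be illustrated with a minimal sketch. Note the assumptions: the GMM boundary matching and geodesic surface extraction are not reproduced here; boundary correspondences are taken as given, and only the thin-plate-spline step that carries boundary matches to interior points is shown, via SciPy's `RBFInterpolator` (this sketch's choice, not necessarily the authors' implementation).

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_warp(src_pts, dst_pts, query_pts):
    """Fit a thin-plate-spline map on matched boundary correspondences,
    then apply it to interior query points."""
    tps = RBFInterpolator(src_pts, dst_pts, kernel="thin_plate_spline")
    return tps(query_pts)

# Toy example: map a unit-circle boundary onto a scaled and shifted circle.
theta = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
src = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # source boundary
dst = 2.0 * src + np.array([1.0, -0.5])                  # matched target boundary
inner = 0.5 * src                                        # interior points to warp
warped = tps_warp(src, dst, inner)
# For exactly affine correspondences the TPS affine term recovers the map.
print(np.allclose(warped, 2.0 * inner + np.array([1.0, -0.5]), atol=1e-6))
```

In the paper's setting the query points would be the garment's interior pixels, ordered along geodesic paths rather than a toy circle.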
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HuPBA; ISE; 600.098; 600.119; 602.133 |
Approved |
no |
|
|
Call Number |
Admin @ si @ AME2019 |
Serial |
3317 |
|
Permanent link to this record |
|
|
|
|
Author |
Dani Rowe; Jordi Gonzalez; Marco Pedersoli; Juan J. Villanueva |
|
|
Title |
On Tracking Inside Groups |
Type |
Journal Article |
|
Year |
2010 |
Publication |
Machine Vision and Applications |
Abbreviated Journal |
MVA |
|
|
Volume |
21 |
Issue |
2 |
Pages |
113–127 |
|
|
Keywords |
|
|
|
Abstract |
This work develops a new architecture for multiple-target tracking in unconstrained dynamic scenes, consisting of a detection level that feeds a two-stage tracking system. A remarkable characteristic of the system is its ability to track several targets while they group and split, without using 3D information. Thus, special attention is given to the feature-selection and appearance-computation modules, and to those modules involved in tracking through groups. The system aims to work as a stand-alone application in complex and dynamic scenarios. No a priori knowledge about either the scene or the targets, based on a previous training period, is used. Hence, the scenario is completely unknown beforehand. Successful tracking has been demonstrated on well-known databases of both indoor and outdoor scenarios. Accurate and robust localisations have been yielded during long-term target merging and occlusions. |
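The paper's two-stage tracker is not reproduced here; as a generic illustration of the detection-to-track association step that detection-fed architectures like this rely on, below is a greedy nearest-neighbour assignment. The function name, the gate threshold, and the toy coordinates are this sketch's assumptions.

```python
import numpy as np

def associate(tracks, detections, gate=50.0):
    """Greedily assign each predicted track position to its nearest
    unused detection; pairs farther than the gate stay unmatched."""
    pairs = []
    used = set()
    for ti, t in enumerate(tracks):
        dists = [np.hypot(t[0] - d[0], t[1] - d[1]) for d in detections]
        for di in np.argsort(dists):
            if int(di) not in used and dists[di] <= gate:
                pairs.append((ti, int(di)))
                used.add(int(di))
                break
    return pairs

tracks = [(10.0, 10.0), (100.0, 100.0)]       # predicted track positions
dets = [(102.0, 98.0), (12.0, 9.0), (300.0, 300.0)]  # current detections
print(associate(tracks, dets))                 # [(0, 1), (1, 0)]
```

A real tracker would replace the greedy loop with an optimal assignment and handle the group/split cases the abstract emphasizes.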
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer-Verlag |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0932-8092 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ISE |
Approved |
no |
|
|
Call Number |
ISE @ ise @ RGP2010 |
Serial |
1158 |
|
Permanent link to this record |
|
|
|
|
Author |
Francisco Javier Orozco; Xavier Roca; Jordi Gonzalez |
|
|
Title |
Real-Time Gaze Tracking with Appearance-Based Models |
Type |
Journal Article |
|
Year |
2008 |
Publication |
Machine Vision and Applications |
Abbreviated Journal |
MVAP |
|
|
Volume |
20 |
Issue |
6 |
Pages |
353–364 |
|
|
Keywords |
Keywords Eyelid and iris tracking, Appearance models, Blinking, Iris saccade, Real-time gaze tracking |
|
|
Abstract |
Psychological evidence has emphasized the importance of eye-gaze analysis in human-computer interaction and emotion interpretation. To this end, current image-analysis algorithms take into consideration eyelid and iris motion detection using colour information and edge detectors. However, eye movement is fast and hence difficult to track precisely and robustly. Instead, our method describes eyelid and iris movements as continuous variables using appearance-based tracking. This approach combines the strengths of adaptive appearance models, optimization methods and backtracking techniques. Thus, in the proposed method, textures are learned on-line from near-frontal images, and illumination changes, occlusions and fast movements are managed. The method achieves real-time performance by combining a backtracking algorithm with two appearance-based trackers, one for eyelid estimation and another for iris estimation. These contributions represent a significant advance towards a reliable gaze-motion description for HCI and expression analysis, where the strengths of complementary methodologies are combined to avoid using high-quality images, colour information, texture training, camera settings and other time-consuming processes. |
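As a toy illustration of the on-line appearance learning this abstract describes, the sketch below blends each new observation into a stored texture template and scores candidates against it. The blending factor `alpha` and the MSE-based match score are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def update_appearance(template, observation, alpha=0.1):
    """Recursive (on-line) appearance update: blend the current
    observation into the stored texture template."""
    return (1.0 - alpha) * template + alpha * observation

def match_score(template, patch):
    """Similarity used to rank candidate eyelid/iris states:
    negative mean squared error against the template."""
    return -float(np.mean((template - patch) ** 2))

# Toy run: the template drifts toward a persistently brighter observation.
tmpl = np.zeros((4, 4))
obs = np.ones((4, 4))
for _ in range(20):
    tmpl = update_appearance(tmpl, obs, alpha=0.2)
print(round(float(tmpl.mean()), 3))   # 0.988, i.e. 1 - 0.8**20
```

In the paper, two such trackers (eyelid and iris) run jointly with a backtracking step; that coupling is beyond this sketch.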
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ISE |
Approved |
no |
|
|
Call Number |
ISE @ ise @ ORG2008 |
Serial |
972 |
|
Permanent link to this record |
|
|
|
|
Author |
Thierry Brouard; Jordi Gonzalez; Caifeng Shan; Massimo Piccardi; Larry S. Davis |
|
|
Title |
Special issue on background modeling for foreground detection in real-world dynamic scenes |
Type |
Journal Article |
|
Year |
2014 |
Publication |
Machine Vision and Applications |
Abbreviated Journal |
MVAP |
|
|
Volume |
25 |
Issue |
5 |
Pages |
1101–1103 |
|
|
Keywords |
|
|
|
Abstract |
Although background modeling and foreground detection are not mandatory steps for computer vision applications, they may prove useful as they separate the primal objects, usually called “foreground”, from the remaining part of the scene, called “background”, and permit different algorithmic treatments in video-processing fields such as video surveillance, optical motion capture, multimedia applications, teleconferencing and human–computer interfaces. Conventional background modeling methods exploit the temporal variation of each pixel to model the background, and foreground detection is made using change detection. The last decade witnessed very significant publications on background modeling, but recently new applications in which the background is not static, such as recordings taken from mobile devices or Internet videos, need new developments to robustly detect moving objects in challenging environments. Thus, effective methods are needed for robustness to deal with dynamic backgrounds […] |
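The "conventional" per-pixel scheme this editorial summarizes (a temporal background model plus change detection) can be sketched as a running average. The learning rate and threshold below are arbitrary illustrative values, not drawn from any paper in the special issue.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Per-pixel running average exploiting temporal variation."""
    return (1.0 - alpha) * bg + alpha * frame.astype(float)

def detect_foreground(bg, frame, thresh=30.0):
    """Change detection: flag pixels far from the background model."""
    return np.abs(frame.astype(float) - bg) > thresh

# Toy frame: a static 8x8 background with one bright 2x2 object.
bg = np.full((8, 8), 100.0)
frame = bg.copy()
frame[2:4, 2:4] = 200.0            # "foreground" object
mask = detect_foreground(bg, frame)
print(int(mask.sum()))             # 4 foreground pixels
bg = update_background(bg, frame)  # object slowly absorbed if it stops moving
```

Exactly this absorption behaviour is what fails for the non-static backgrounds (hand-held recordings, Internet videos) the special issue targets.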
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer Berlin Heidelberg |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0932-8092 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ISE; 600.078 |
Approved |
no |
|
|
Call Number |
BGS2014a |
Serial |
2411 |
|
Permanent link to this record |
|
|
|
|
Author |
Ivan Huerta; Ariel Amato; Xavier Roca; Jordi Gonzalez |
|
|
Title |
Exploiting Multiple Cues in Motion Segmentation Based on Background Subtraction |
Type |
Journal Article |
|
Year |
2013 |
Publication |
Neurocomputing |
Abbreviated Journal |
NEUCOM |
|
|
Volume |
100 |
Issue |
|
Pages |
183–196 |
|
|
Keywords |
Motion segmentation; Shadow suppression; Colour segmentation; Edge segmentation; Ghost detection; Background subtraction |
|
|
Abstract |
This paper presents a novel algorithm for mobile-object segmentation from static background scenes, which is both robust and accurate under most of the common problems found in motion segmentation. In our first contribution, a case analysis of motion-segmentation errors is presented, taking into account the inaccuracies associated with different cues, namely colour, edge and intensity. Our second contribution is a hybrid architecture which copes with the main issues observed in the case analysis by fusing the knowledge from the aforementioned three cues and a temporal difference algorithm. On one hand, we enhance the colour and edge models to solve not only global and local illumination changes (i.e. shadows and highlights) but also camouflage in intensity. In addition, local information is also exploited to solve camouflage in chroma. On the other hand, the intensity cue is applied when colour and edge cues are not available because their values are beyond the dynamic range. Additionally, a temporal difference scheme is included to segment motion where those three cues cannot be reliably computed, for example in those background regions not visible during the training period. Lastly, our approach is extended to handle ghost detection. The proposed method obtains very accurate and robust motion-segmentation results in multiple indoor and outdoor scenarios, while outperforming the most-cited state-of-the-art approaches. |
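A heavily simplified sketch of the fusion idea described above, assuming the per-cue segmentation masks and a colour-validity (dynamic-range) mask have already been computed upstream; the function name and the toy masks are this sketch's assumptions, and the paper's temporal-difference and ghost-handling stages are omitted.

```python
import numpy as np

def fuse_cues(colour_mask, edge_mask, intensity_mask, colour_valid):
    """Hybrid fusion: where colour lies inside the sensor's dynamic
    range, trust the colour and edge cues; elsewhere fall back to the
    intensity cue. All inputs are boolean per-pixel masks."""
    chromatic = colour_valid & (colour_mask | edge_mask)
    fallback = ~colour_valid & intensity_mask
    return chromatic | fallback

# Toy 2x2 frame: one pixel is foreground by colour, one by edge, and one
# saturated pixel (colour invalid) is recovered by the intensity fallback.
colour = np.array([[True, False], [False, False]])
edge = np.array([[False, True], [False, False]])
intensity = np.array([[False, False], [True, False]])
valid = np.array([[True, True], [False, True]])
fused = fuse_cues(colour, edge, intensity, valid)
print(int(fused.sum()))   # 3 foreground pixels
```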
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Elsevier |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ISE |
Approved |
no |
|
|
Call Number |
Admin @ si @ HAR2013 |
Serial |
1808 |
|
Permanent link to this record |