Author: Aura Hernandez-Sabate; Lluis Albarracin; F. Javier Sanchez
Title: Graph-Based Problem Explorer: A Software Tool to Support Algorithm Design Learning While Solving the Salesperson Problem
Type: Journal Article
Year: 2020
Publication: Mathematics; Abbreviated Journal: MATH
Volume: 8; Issue: 9; Pages: 1595
Keywords: STEM education; project-based learning; coding; software tool
Abstract: In this article, we present a sequence of activities in the form of a project in order to promote learning on the design and analysis of algorithms. The project is based on the resolution of a real problem, the salesperson problem, and it is theoretically grounded in the fundamentals of mathematical modelling. To support the students' work, a multimedia tool called Graph-based Problem Explorer (GbPExplorer) has been designed and refined to promote the development of computer literacy in engineering and science university students. This tool incorporates several modules that allow coding different algorithmic techniques for solving the salesperson problem. Based on five years of educational design research, we observe that working with GbPExplorer during the project gives students the possibility of representing the situation under study in the form of graphs and analyzing it from a computational point of view.
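GbPExplorer itself is not reproduced here, but the algorithmic techniques the project targets are classic. As a minimal sketch, assuming a symmetric distance matrix, the nearest-neighbour heuristic for the salesperson problem that students might code in one of the tool's modules looks like:

```python
# Minimal sketch (not GbPExplorer code): a nearest-neighbour heuristic
# for the salesperson problem on a symmetric distance matrix.

def nearest_neighbour_tour(dist, start=0):
    """Greedily visit the closest unvisited city; return the tour and its length."""
    n = len(dist)
    unvisited = set(range(n)) - {start}
    tour, total = [start], 0.0
    current = start
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist[current][c])
        total += dist[current][nxt]
        unvisited.remove(nxt)
        tour.append(nxt)
        current = nxt
    total += dist[current][start]  # close the tour back at the starting city
    return tour, total

# Tiny illustrative instance (4 cities):
dist = [[0, 2, 9, 10],
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]
print(nearest_neighbour_tour(dist))  # ([0, 1, 3, 2], 33.0)
```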
 
Address: September 2020
Notes: IAM; ISE
Approved: no
Call Number: Admin @ si @; Serial: 3722
 

 
Author: Wenjuan Gong; Yue Zhang; Wei Wang; Peng Cheng; Jordi Gonzalez
Title: Meta-MMFNet: Meta-learning-based Multi-model Fusion Network for Micro-expression Recognition
Type: Journal Article
Year: 2023
Publication: ACM Transactions on Multimedia Computing, Communications, and Applications; Abbreviated Journal: TMCCA
Volume: 20; Issue: 2; Pages: 1–20
Abstract: Despite its wide applications in criminal investigations and in clinical communication with patients suffering from autism, automatic micro-expression recognition remains a challenging problem because of the lack of training data and the class-imbalance problem. In this study, we propose a meta-learning-based multi-model fusion network (Meta-MMFNet) to address these problems. The proposed method is based on a metric-based meta-learning pipeline, which is specifically designed for few-shot learning and is suitable for model-level fusion. Frame-difference and optical-flow features are fused, deep features are extracted from the fused feature, and finally a weighted-sum model fusion method is applied within the meta-learning framework for micro-expression classification. Meta-MMFNet achieves better results than state-of-the-art methods on four datasets. The code is available at https://github.com/wenjgong/meta-fusion-based-method.
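The full Meta-MMFNet implementation is in the linked repository; the weighted-sum, model-level fusion step can be sketched independently. All names and weights below are illustrative, not the paper's:

```python
# Minimal sketch (not the Meta-MMFNet implementation): weighted-sum,
# model-level fusion of per-model class scores, as used in metric-based
# few-shot pipelines. Model names and weights here are illustrative.
import numpy as np

def fuse_scores(model_scores, weights):
    """Combine per-model class-score arrays with a normalised weighted sum."""
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()                       # normalise the fusion weights
    stacked = np.stack(model_scores)               # shape (n_models, n_classes)
    return np.tensordot(weights, stacked, axes=1)  # shape (n_classes,)

# Two hypothetical models scoring three micro-expression classes:
scores_frame_diff = np.array([0.2, 0.5, 0.3])
scores_optical_flow = np.array([0.1, 0.7, 0.2])
fused = fuse_scores([scores_frame_diff, scores_optical_flow], weights=[0.4, 0.6])
print(fused.argmax())  # predicted class index under the fused scores
```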
Notes: ISE
Approved: no
Call Number: Admin @ si @ GZW2023; Serial: 3862
 

 
Author: Dani Rowe; Jordi Gonzalez; Marco Pedersoli; Juan J. Villanueva
Title: On Tracking Inside Groups
Type: Journal Article
Year: 2010
Publication: Machine Vision and Applications; Abbreviated Journal: MVA
Volume: 21; Issue: 2; Pages: 113–127
Abstract: This work develops a new architecture for multiple-target tracking in unconstrained dynamic scenes, consisting of a detection level that feeds a two-stage tracking system. A remarkable characteristic of the system is its ability to track several targets while they group and split, without using 3D information. Thus, special attention is given to the feature-selection and appearance-computation modules, and to the modules involved in tracking through groups. The system aims to work as a stand-alone application in complex and dynamic scenarios. No a-priori knowledge about either the scene or the targets, based on a previous training period, is used; hence, the scenario is completely unknown beforehand. Successful tracking has been demonstrated on well-known databases of both indoor and outdoor scenarios, yielding accurate and robust localisations during long-term target merging and occlusions.
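The paper's two-stage tracker is not public code; a minimal sketch of one ingredient, appearance-based re-association of targets after a group splits (here with colour histograms, an assumed feature choice, not necessarily the paper's), might look like:

```python
# Minimal sketch (not the paper's system): re-associating detections after a
# group splits by comparing their colour histograms against the appearance
# models stored before the merge.
import numpy as np

def colour_histogram(patch, bins=8):
    """Per-channel histogram of an HxWx3 uint8 patch, L1-normalised."""
    hist = np.concatenate([
        np.histogram(patch[..., c], bins=bins, range=(0, 256))[0]
        for c in range(3)
    ]).astype(float)
    return hist / hist.sum()

def reassociate(pre_merge_hists, post_split_patches):
    """Match each post-split detection to the most similar pre-merge target."""
    matches = {}
    for det_id, patch in post_split_patches.items():
        h = colour_histogram(patch)
        # Bhattacharyya coefficient: larger means more similar appearance.
        best = max(pre_merge_hists,
                   key=lambda tid: np.sum(np.sqrt(pre_merge_hists[tid] * h)))
        matches[det_id] = best
    return matches
```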
Publisher: Springer-Verlag
ISSN: 0932-8092
Notes: ISE
Approved: no
Call Number: ISE @ ise @ RGP2010; Serial: 1158
 

 
Author: J. Stöttinger; A. Hanbury; N. Sebe; Theo Gevers
Title: Sparse Color Interest Points for Image Retrieval and Object Categorization
Type: Journal Article
Year: 2012
Publication: IEEE Transactions on Image Processing; Abbreviated Journal: TIP
Volume: 21; Issue: 5; Pages: 2681–2692
Abstract: Interest point detection is an important research area in the field of image processing and computer vision. In particular, image retrieval and object categorization rely heavily on interest point detection, from which local image descriptors are computed for image matching. In general, interest points are based on luminance, and color has been largely ignored. However, the use of color increases the distinctiveness of interest points, and may therefore provide a selective search that reduces the total number of interest points used for image matching. This paper proposes color interest points for sparse image representation. To reduce sensitivity to varying imaging conditions, light-invariant interest points are introduced. Color statistics based on occurrence probability lead to color-boosted points, which are obtained through saliency-based feature selection. Furthermore, a principal component analysis-based scale selection method is proposed, which gives a robust scale estimate per interest point. Large-scale experiments show that the proposed color interest point detector has higher repeatability than a luminance-based one. Furthermore, in the context of image retrieval, a reduced and predictable number of color features yields an increase in performance compared with state-of-the-art interest points. Finally, in the context of object recognition on the Pascal VOC 2007 challenge, our method gives performance comparable to state-of-the-art methods using only a small fraction of the features, reducing the computing time considerably.
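The colour-boosted detector itself is not sketched here; for reference, the luminance-based baseline it is compared against can be computed with OpenCV's Harris detector (the image path and threshold below are placeholders):

```python
# Minimal sketch (not the paper's detector): Harris interest points on the
# luminance channel via OpenCV, i.e., the luminance-based baseline that the
# colour-boosted detector is compared against. 'image.jpg' is a placeholder.
import cv2
import numpy as np

img = cv2.imread("image.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)

# Harris response: 2x2 neighbourhood, 3x3 Sobel aperture, k = 0.04.
response = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)

# Keep a sparse set of points by thresholding the response map.
ys, xs = np.where(response > 0.01 * response.max())
print(f"{len(xs)} luminance-based interest points")
```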
 
ISSN: 1057-7149
Notes: ALTRES; ISE
Approved: no
Call Number: Admin @ si @ SHS2012; Serial: 1847
 

 
Author: R. Valenti; Theo Gevers
Title: Combining Head Pose and Eye Location Information for Gaze Estimation
Type: Journal Article
Year: 2012
Publication: IEEE Transactions on Image Processing; Abbreviated Journal: TIP
Volume: 21; Issue: 2; Pages: 802–815
Abstract: Head pose and eye location for gaze estimation have been studied separately in numerous works in the literature. Previous research shows that satisfactory accuracy in head pose and eye location estimation can be achieved in constrained settings. However, in the presence of nonfrontal faces, eye locators are not adequate to accurately locate the center of the eyes. On the other hand, head pose estimation techniques are able to deal with these conditions; hence, they may be suited to enhance the accuracy of eye localization. Therefore, in this paper, a hybrid scheme is proposed that combines head pose and eye location information to obtain enhanced gaze estimation. To this end, the transformation matrix obtained from the head pose is used to normalize the eye regions, and in turn the transformation matrix generated by the found eye location is used to correct the pose estimation procedure. The scheme is designed to enhance the accuracy of eye location estimation, particularly in low-resolution videos, to extend the operative range of the eye locators, and to improve the accuracy of the head pose tracker. These enhanced estimations are then combined to obtain a novel visual gaze estimation system, which uses both eye location and head information to refine the gaze estimates. The experimental results show that the proposed unified scheme improves the accuracy of eye estimations by 16% to 23%, and extends the operating range by more than 15° by overcoming the problems introduced by extreme head poses. Moreover, the accuracy of the head pose tracker is improved by 12% to 24%. Finally, experiments on the combined gaze estimation system show that it is accurate (with a mean error between 2° and 5°) and can be used in cases where classic approaches would fail without imposing restraints on the position of the head.
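As a minimal sketch of the combination idea, assuming the head pose is available as yaw/pitch angles and the eye locator yields an eye-in-head direction, the gaze direction in camera coordinates is the head rotation applied to the eye direction (the composition rule and values below are illustrative, not the paper's exact scheme):

```python
# Minimal sketch (not the paper's system): composing a head-pose rotation
# with an eye-in-head direction to obtain a gaze direction in camera
# coordinates. Angles and the composition here are illustrative.
import numpy as np

def rotation_matrix(yaw, pitch):
    """Rotation from head to camera coordinates (roll ignored), in radians."""
    cy, sy, cp, sp = np.cos(yaw), np.sin(yaw), np.cos(pitch), np.sin(pitch)
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])   # yaw about y
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])   # pitch about x
    return Ry @ Rx

head_R = rotation_matrix(yaw=np.radians(20), pitch=np.radians(-5))
eye_dir = np.array([0.1, -0.05, 1.0])   # eye-in-head gaze, roughly towards +z
eye_dir /= np.linalg.norm(eye_dir)
gaze = head_R @ eye_dir                 # gaze direction in camera coordinates
print(gaze)
```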
 
ISSN: 1057-7149
Notes: ALTRES; ISE
Approved: no
Call Number: Admin @ si @ VaG 2012b; Serial: 1851