Author: Zhong Jin; Zhen Lou; Jing-Yu Yang; Quan-sen Sun
Title: Face detection using template matching and skin color information
Type: Miscellaneous
Year: 2005
Publication: International Conference on Intelligent Computing, pp. 636–645
Address: Hefei, China
Call Number: Admin @ si @ JLY2005 (Serial 627)
 

 
Author: Zhong Jin; Zhen Lou; Jing-Yu Yang; Quan-sen Sun
Title: Face Detection using Template Matching and Skin-color Information
Type: Journal
Year: 2007
Publication: Neurocomputing, 70(4–6): 794–800
Call Number: Admin @ si @ JLY2007 (Serial 878)
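
The two records above pair skin-color segmentation with template matching for face detection. As a rough, generic illustration of that combination (not the authors' specific method), the following Python/OpenCV sketch keeps template-matching hits only inside skin-colored regions; the YCrCb bounds, the score threshold, and the skin-coverage test are all illustrative assumptions.

```python
import cv2
import numpy as np

def detect_face_candidates(image_bgr, face_template, threshold=0.6):
    """face_template: a 2D grayscale (uint8) average-face patch."""
    # 1) Skin-color segmentation in YCrCb space; the bounds below are
    #    common rule-of-thumb values, not taken from the papers.
    ycrcb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb)
    skin_mask = cv2.inRange(ycrcb, np.array([0, 133, 77]),
                            np.array([255, 173, 127]))

    # 2) Normalized cross-correlation template matching on the gray image.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    scores = cv2.matchTemplate(gray, face_template, cv2.TM_CCOEFF_NORMED)

    h, w = face_template.shape
    detections = []
    for y, x in zip(*np.where(scores >= threshold)):
        # 3) Keep a match only if its window is mostly skin-colored.
        if skin_mask[y:y + h, x:x + w].mean() > 128:
            detections.append((int(x), int(y), w, h, float(scores[y, x])))
    return detections
```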
 

 
Author: Zhong Jin; Jing-Yu Yang; Zhen Lou
Title: A luminance-conditional distribution model of skin color information
Type: Miscellaneous
Year: 2005
Publication: 2005 Beijing International Conference on Imaging: Technology and Applications for the 21st Century, pp. 280–281
Address: Beijing, China
Call Number: Admin @ si @ JYL2005 (Serial 628)
 

 
Author: Zhong Jin; Franck Davoine; Zhen Lou; Jing-Yu Yang
Title: A novel PCA-based Bayes classifier and face analysis
Type: Book Chapter
Year: 2006
Publication: International Conference on Advances in Biometrics (ICB’06), LNCS 3832: 144–150
Address: Hong Kong
Call Number: Admin @ si @ JDL2006 (Serial 624)
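
As a generic sketch of the combination named in this record's title, PCA dimensionality reduction followed by a Gaussian Bayes classifier, the snippet below uses a quadratic discriminant as the Bayes rule; the component count, the classifier form, and the synthetic stand-in data are assumptions, not details taken from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 100))        # stand-in for face feature vectors
y = rng.integers(0, 2, size=300)       # two classes, e.g. face vs. non-face

# PCA makes the class-conditional Gaussians estimable in a low-dimensional
# subspace; QDA then applies the Bayes decision rule under those Gaussians.
model = make_pipeline(PCA(n_components=20), QuadraticDiscriminantAnalysis())
model.fit(X, y)
print(model.predict(X[:5]))
```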
 

 
Author: Zhong Jin; Franck Davoine; Zhen Lou
Title: Facial expression analysis by using KPCA
Type: Miscellaneous
Year: 2003
Publication: IEEE International Conference on Robotics, Intelligent Systems and Signal Processing (IEEE RISSP 2003), pp. 736–741
Address: Changsha, Hunan, China
Call Number: Admin @ si @ JDL2003 (Serial 431)
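
A minimal sketch of the KPCA-based approach this title names: kernel PCA extracts nonlinear features from face images and a downstream classifier predicts the expression. The RBF kernel, its parameters, the SVM, and the synthetic data are assumptions rather than details from the paper.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64 * 64))    # stand-in for flattened face images
y = rng.integers(0, 6, size=200)       # e.g., 6 basic expression classes

model = make_pipeline(
    # Nonlinear feature extraction; kernel and gamma are assumptions.
    KernelPCA(n_components=30, kernel="rbf", gamma=1e-4),
    SVC(kernel="linear"),              # downstream expression classifier
)
model.fit(X, y)
print(model.score(X, y))               # training accuracy on the stand-in data
```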
 

 
Author: Zhong Jin; Franck Davoine; Zhen Lou
Title: An Effective EM Algorithm for PCA Mixture Model
Type: Miscellaneous
Year: 2004
Publication: Structural and Statistical Pattern Recognition, Lecture Notes in Computer Science, 3138: 626–634
Address: Lisbon, Portugal
Call Number: Admin @ si @ JDL2004 (Serial 482)
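
To make the record's topic concrete, here is the generic E-step/M-step pattern for a one-dimensional Gaussian mixture; PCA mixture models follow the same skeleton with PCA-constrained covariance updates, which are not reproduced here.

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, size=k)             # initial component means
    var = np.full(k, x.var())              # initial variances
    pi = np.full(k, 1.0 / k)               # mixing weights
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
                  / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from responsibility-weighted data.
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / len(x)
    return mu, var, pi

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])
print(em_gmm_1d(x))
```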
 

 
Author: Zhong Jin; Franck Davoine
Title: Orthogonal ICA Representation of Images
Type: Miscellaneous
Year: 2004
Publication: 8th International Conference on Control, Automation, Robotics and Vision, pp. 369–374
Address: Kunming, China
Call Number: Admin @ si @ JiD2004 (Serial 499)
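
One plausible reading of an "orthogonal ICA representation" (an assumption, not necessarily the paper's procedure) is to learn an ICA basis and then orthonormalize it. The sketch below does exactly that with FastICA followed by an SVD projection onto the nearest orthonormal basis.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
X = rng.laplace(size=(500, 256))       # stand-in for flattened image patches

ica = FastICA(n_components=16, random_state=0)
ica.fit(X)
A = ica.mixing_                        # (256, 16) ICA basis (mixing matrix)

# Project the basis onto the nearest orthonormal matrix via SVD.
U, _, Vt = np.linalg.svd(A, full_matrices=False)
A_orth = U @ Vt                        # columns are now orthonormal
coeffs = X @ A_orth                    # orthogonal expansion coefficients
print(np.allclose(A_orth.T @ A_orth, np.eye(16)))  # True
```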
 

 
Author: Zhijie Fang; David Vazquez; Antonio Lopez
Title: On-Board Detection of Pedestrian Intentions
Type: Journal Article
Year: 2017
Publication: Sensors (SENS)
Volume: 17
Issue: 10
Pages: 2193
Keywords: pedestrian intention; ADAS; self-driving
Abstract: Avoiding vehicle-to-pedestrian crashes is a critical requirement for today's advanced driver assistance systems (ADAS) and future self-driving vehicles. Accordingly, detecting pedestrians from raw sensor data has a research history of more than 15 years, with vision playing a central role. In recent years, deep learning has boosted the accuracy of image-based pedestrian detectors. However, detection is just the first step towards answering the core question: is the vehicle going to crash with a pedestrian if preventive actions are not taken? Knowing as soon as possible whether a detected pedestrian intends to cross the road ahead of the vehicle is therefore essential for performing safe and comfortable maneuvers that prevent a crash. Yet, compared to pedestrian detection, there is relatively little literature on detecting pedestrian intentions. This paper contributes along this line by presenting a new vision-based approach that analyzes the pose of a pedestrian over several frames to determine whether he or she is going to enter the road. We present experiments showing 750 ms of anticipation for pedestrians crossing the road, which at a typical urban driving speed of 50 km/h provides 15 additional meters (compared to a pure pedestrian detector) for automatic vehicle reactions or for warning the driver. Moreover, in contrast with state-of-the-art methods, our approach is monocular, requiring neither stereo nor optical flow information.
Notes: ADAS; 600.085; 600.076; 601.223; 600.116; 600.118
Call Number: Admin @ si @ FVL2017 (Serial 2983)
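
The abstract describes a pipeline that classifies a pedestrian's intention from their pose over several frames. A schematic sketch of that kind of pipeline follows: per-frame 2D keypoints are stacked over a short temporal window and fed to a binary crossing/not-crossing classifier. A detector and pose estimator are assumed to exist upstream; the window size, normalization, classifier choice, and synthetic data are all assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

N_JOINTS, WINDOW = 18, 8               # e.g., 18 keypoints over 8 frames

def window_feature(pose_seq):
    """pose_seq: (WINDOW, N_JOINTS, 2) array of per-frame (x, y) keypoints."""
    pose_seq = np.asarray(pose_seq, dtype=float)
    # Center each frame on its mean keypoint for translation invariance.
    pose_seq = pose_seq - pose_seq.mean(axis=1, keepdims=True)
    return pose_seq.reshape(-1)        # one flat feature vector per window

# Synthetic stand-in sequences labelled crossing (1) / not crossing (0).
rng = np.random.default_rng(0)
X = np.stack([window_feature(rng.normal(size=(WINDOW, N_JOINTS, 2)))
              for _ in range(400)])
y = rng.integers(0, 2, size=400)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:3]))              # crossing-intention estimate per window
```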
 

 
Author: Zhijie Fang; Antonio Lopez
Title: Is the Pedestrian going to Cross? Answering by 2D Pose Estimation
Type: Conference Article
Year: 2018
Publication: IEEE Intelligent Vehicles Symposium
Pages: 1271–1276
Conference: IV
Abstract: Our recent work suggests that, thanks to today's powerful CNNs, image-based 2D pose estimation is a promising cue for determining pedestrian intentions such as crossing the road in the path of the ego-vehicle, stopping before entering the road, and starting to walk or bending towards the road. This statement is based on results obtained on non-naturalistic sequences (the Daimler dataset), i.e. sequences choreographed specifically for performing the study. Fortunately, a new publicly available dataset (JAAD) has recently appeared that allows developing methods for detecting pedestrian intentions in naturalistic driving conditions; more specifically, for addressing the relevant question: is the pedestrian going to cross? Accordingly, in this paper we use JAAD to assess the usefulness of 2D pose estimation for answering this question. We combine CNN-based pedestrian detection, tracking, and pose estimation to predict the crossing action from monocular images. Overall, the proposed pipeline provides new state-of-the-art results.
Notes: ADAS; 600.124; 600.116; 600.118
Call Number: Admin @ si @ FaL2018 (Serial 3181)
 

 
Author: Zhijie Fang; Antonio Lopez
Title: Intention Recognition of Pedestrians and Cyclists by 2D Pose Estimation
Type: Journal Article
Year: 2019
Publication: IEEE Transactions on Intelligent Transportation Systems (TITS)
Volume: 21
Issue: 11
Pages: 4773–4783
Abstract: Anticipating the intentions of vulnerable road users (VRUs) such as pedestrians and cyclists is critical for performing safe and comfortable driving maneuvers. This is the case for human driving and should therefore be taken into account by systems providing any level of driving assistance, from advanced driver assistance systems (ADAS) to fully autonomous vehicles (AVs). In this paper, we show how the latest advances in monocular vision-based human pose estimation, i.e. those relying on deep convolutional neural networks (CNNs), make it possible to recognize the intentions of such VRUs. In the case of cyclists, we assume that they follow traffic rules to indicate future maneuvers with arm signals. In the case of pedestrians, no indications can be assumed. Instead, we hypothesize that the walking pattern of a pedestrian allows us to determine whether he/she intends to cross the road in the path of the ego-vehicle, so that the ego-vehicle must maneuver accordingly (e.g. slowing down or stopping). We show how the same methodology can be used for recognizing both pedestrians' and cyclists' intentions. For pedestrians, we perform experiments on the JAAD dataset. For cyclists, we did not find an analogous dataset, so we created our own by acquiring and annotating videos, which we share with the research community. Overall, the proposed pipeline provides new state-of-the-art results on the intention recognition of VRUs.
Notes: ADAS; 600.118
Call Number: Admin @ si @ FaL2019 (Serial 3305)
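
For the cyclist case, the abstract assumes that arm signals indicate future maneuvers. A toy geometric rule over 2D keypoints, purely illustrative and not the paper's method, could flag an extended, roughly horizontal arm like this (the joint inputs and both thresholds are assumptions):

```python
import numpy as np

def arm_extended(shoulder, elbow, wrist, min_ratio=0.9, max_rise=0.15):
    """True if the arm is held out roughly straight and horizontal."""
    shoulder, elbow, wrist = map(np.asarray, (shoulder, elbow, wrist))
    span = np.linalg.norm(wrist - shoulder)          # shoulder-to-wrist reach
    bent = np.linalg.norm(elbow - shoulder) + np.linalg.norm(wrist - elbow)
    straight = span / bent > min_ratio               # nearly collinear joints
    horizontal = abs(wrist[1] - shoulder[1]) < max_rise * span
    return bool(straight and horizontal)

# Wrist level with the shoulder, arm almost straight -> treated as a signal.
print(arm_extended((0.0, 0.0), (0.3, 0.02), (0.6, 0.03)))  # True
```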
 

 
Author: Zhijie Fang
Title: Behavior understanding of vulnerable road users by 2D pose estimation
Type: Book Whole
Year: 2019
Publication: PhD Thesis, Universitat Autonoma de Barcelona-CVC
Address: May 2019
Thesis: Ph.D. thesis
Publisher: Ediciones Graficas Rey
Editor: Antonio Lopez; David Vazquez
ISBN: 978-84-948531-6-6
Abstract: Anticipating the intentions of vulnerable road users (VRUs) such as pedestrians and cyclists can be critical for performing safe and comfortable driving maneuvers. This is the case for human driving and should therefore be taken into account by systems providing any level of driving assistance, from advanced driver assistance systems (ADAS) to fully autonomous vehicles (AVs). In this PhD work, we show how the latest advances in monocular vision-based human pose estimation, i.e. those relying on deep convolutional neural networks (CNNs), make it possible to recognize the intentions of such VRUs. In the case of cyclists, we assume that they follow the established traffic codes to indicate future left/right turns and stop maneuvers with arm signals. In the case of pedestrians, no indications can be assumed a priori. Instead, we hypothesize that the walking pattern of a pedestrian allows us to determine whether he/she intends to cross the road in the path of the ego-vehicle, so that the ego-vehicle must maneuver accordingly (e.g. slowing down or stopping). We show how the same methodology can be used for recognizing both pedestrians' and cyclists' intentions. For pedestrians, we perform experiments on the publicly available Daimler and JAAD datasets. For cyclists, we did not find an analogous dataset, so we created our own by acquiring and annotating corresponding video sequences, which we aim to share with the research community. Overall, the proposed pipeline provides new state-of-the-art results on the intention recognition of VRUs.
Notes: ADAS; 600.118
Call Number: Admin @ si @ Fan2019 (Serial 3388)
 

 
Author: Zhengying Liu; Zhen Xu; Shangeth Rajaa; Meysam Madadi; Julio C. S. Jacques Junior; Sergio Escalera; Adrien Pavao; Sebastien Treguer; Wei-Wei Tu; Isabelle Guyon
Title: Towards Automated Deep Learning: Analysis of the AutoDL challenge series 2019
Type: Conference Article
Year: 2020
Publication: Proceedings of Machine Learning Research
Volume: 123
Pages: 242-252
Conference: NEURIPS
Abstract: We present the design and results of recent competitions in Automated Deep Learning (AutoDL). In the AutoDL challenge series 2019, we organized five machine learning challenges: AutoCV, AutoCV2, AutoNLP, AutoSpeech, and AutoDL. The first four challenges each concern a specific application domain, such as computer vision, natural language processing, and speech recognition. As of March 2020, the last challenge, AutoDL, was still ongoing, and we present only its design. Some highlights of this work include: (1) a benchmark suite of baseline AutoML solutions, with emphasis on domains for which Deep Learning methods have had prior success (image, video, text, speech, etc.); (2) a novel any-time learning framework, which opens doors for further theoretical consideration; (3) a repository of around 100 datasets (from all the above domains), over half of which are released as public datasets to enable research on meta-learning; (4) analyses revealing that winning solutions generalize to new unseen datasets, validating progress towards a universal AutoML solution; (5) open-sourcing of the challenge platform, the starting kit, the dataset formatting toolkit, and all winning solutions (all information available at autodl.chalearn.org).
Notes: HUPBA
Call Number: Admin @ si @ LXR2020 (Serial 3500)
 

 
Author: Zhengying Liu; Zhen Xu; Sergio Escalera; Isabelle Guyon; Julio C. S. Jacques Junior; Meysam Madadi; Adrien Pavao; Sebastien Treguer; Wei-Wei Tu
Title: Towards automated computer vision: analysis of the AutoCV challenges 2019
Type: Journal Article
Year: 2020
Publication: Pattern Recognition Letters (PRL)
Volume: 135
Pages: 196-203
Keywords: Computer vision; AutoML; Deep learning
Abstract: We present the results of recent challenges in Automated Computer Vision (AutoCV, renamed here for clarity AutoCV1 and AutoCV2, 2019), which are part of a series of challenges on Automated Deep Learning (AutoDL). These two competitions aim at searching for fully automated solutions for classification tasks in computer vision, with an emphasis on any-time performance. The first competition was limited to image classification, while the second one included both images and videos. Our design required participants to submit their code to a challenge platform for blind testing on five datasets, covering both training and testing, without any human intervention whatsoever. Winning solutions adopted deep learning techniques based on already published architectures, such as AutoAugment, MobileNet, and ResNet, to reach state-of-the-art performance within the time budget of the challenge (only 20 minutes of GPU time). The novel contributions include strategies to deliver good preliminary results at any time during the learning process, such that a method can be stopped early and still deliver good performance. This feature is key for the adoption of such techniques by data analysts who want to obtain preliminary results rapidly on large datasets and to speed up the development process. The soundness of our design was verified in several respects: (1) little overfitting of the online leaderboard, which provided feedback on 5 development datasets, was observed compared to the final blind testing on the 5 (separate) final test datasets, suggesting that winning solutions might generalize to other computer vision classification tasks; (2) error bars on the winners' performance allow us to say with confidence that they performed significantly better than the baseline solutions we provided; (3) the ranking of participants according to the any-time metric we designed, namely the Area under the Learning Curve, was different from that of the fixed-time metric, i.e. AUC at the end of the fixed time budget. We released all winning solutions under open-source licenses. At the end of the AutoDL challenge series, all data of the challenge will be made publicly available, thus providing a collection of uniformly formatted datasets, which can serve to conduct further research, particularly on meta-learning.
Notes: HuPBA; no proj
Call Number: Admin @ si @ LXE2020 (Serial 3427)
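
The "Area under the Learning Curve" metric mentioned in point (3) can be sketched as follows: scores posted over time form a step-shaped learning curve, which is integrated over a normalized time axis. The AutoDL challenges used a log-scaled time transform; the exact transform and the parameters below (t0, the hold-last-score rule) are assumptions, not the official challenge code.

```python
import numpy as np

def area_under_learning_curve(times, scores, budget, t0=60.0):
    """times: seconds at which scores were posted; budget: total seconds."""
    # Log-scale time so early improvements count more (any-time emphasis).
    tau = np.log1p(np.asarray(times, dtype=float) / t0) / np.log1p(budget / t0)
    tau = np.concatenate([tau, [1.0]])       # extend axis to end of budget
    best = np.maximum.accumulate(np.asarray(scores, dtype=float))
    # The learning curve is a step function holding the best score so far.
    return float(np.sum(best * np.diff(tau)))

# Three submissions during a 20-minute budget.
print(area_under_learning_curve([30, 120, 600], [0.2, 0.5, 0.6], budget=1200))
```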
 

 
Author: Zhengying Liu; Isabelle Guyon; Julio C. S. Jacques Junior; Meysam Madadi; Sergio Escalera; Adrien Pavao; Hugo Jair Escalante; Wei-Wei Tu; Zhen Xu; Sebastien Treguer
Title: AutoCV Challenge Design and Baseline Results
Type: Conference Article
Year: 2019
Publication: La Conference sur l'Apprentissage Automatique
Address: Toulouse, France; July 2019
Abstract: We present the design and beta tests of a new machine learning challenge called AutoCV (for Automated Computer Vision), which is the first event in a series of challenges we are planning on the theme of Automated Deep Learning. We target applications for which Deep Learning methods have had great success in the past few years, with the aim of pushing the state of the art in fully automated methods to design the architecture of neural networks and train them without any human intervention. The tasks are restricted to multi-label image classification problems, from domains including medical, aerial, people, object, and handwriting imaging. The type of images will thus vary widely in scale, texture, and structure. Raw data are provided (no features extracted), but all datasets are formatted in a uniform tensor manner (although images may have fixed or variable sizes within a dataset). The participants' code will be blind-tested on a challenge platform in a controlled manner, with restrictions on training and test time and on memory. The challenge is part of the official selection of IJCNN 2019.
Notes: HUPBA; no proj
Call Number: Admin @ si @ LGJ2019 (Serial 3323)
 

 
Author: Zhengying Liu; Adrien Pavao; Zhen Xu; Sergio Escalera; Isabelle Guyon; Julio C. S. Jacques Junior; Meysam Madadi; Sebastien Treguer
Title: How far are we from true AutoML: reflection from winning solutions and results of AutoDL challenge
Type: Conference Article
Year: 2020
Publication: 7th ICML Workshop on Automated Machine Learning
Address: Virtual; July 2020
Conference: ICML
Abstract: Following the completion of the AutoDL challenge (the final challenge in the ChaLearn AutoDL challenge series 2019), we investigate the winning solutions and the challenge results to answer an important motivational question: how far are we from achieving true AutoML? On the one hand, the winning solutions achieve good (accurate and fast) classification performance on unseen datasets. On the other hand, all winning solutions still contain a considerable amount of hard-coded knowledge about the domain (or modality), such as image, video, text, speech, and tabular data. This form of ad-hoc meta-learning could be replaced by more automated forms of meta-learning in the future. Organizing a meta-learning challenge could help forge AutoML solutions that generalize to new unseen domains (e.g. new types of sensor data), as well as yield insights on the AutoML problem from a more fundamental point of view. The datasets of the AutoDL challenge are a resource that can be used for further benchmarks, and the code of the winners has been open-sourced, which is a big step towards "democratizing" Deep Learning.
Notes: HUPBA
Call Number: Admin @ si @ LPX2020 (Serial 3502)