|
Hugo Jair Escalante, Victor Ponce, Sergio Escalera, Xavier Baro, Alicia Morales-Reyes, & Jose Martinez-Carranza. (2017). Evolving weighting schemes for the Bag of Visual Words. Neural Computing and Applications, 28(5), 925–939.
Abstract: The Bag of Visual Words (BoVW) is an established representation in computer vision. Inspired by text mining, this representation has proved very effective in many domains. However, in most cases standard term-weighting schemes are adopted (e.g., term frequency or TF-IDF). It remains an open question whether alternative weighting schemes could boost the performance of BoVW-based methods. More importantly, it is unknown whether effective weighting schemes can be learned automatically from scratch. This paper sheds light on both of these unknowns. On the one hand, we report an evaluation of the most common weighting schemes used in text mining but rarely used in computer vision tasks. In addition, we propose an evolutionary algorithm capable of automatically learning weighting schemes for computer vision problems. We report empirical results from an extensive study on several computer vision problems. The results show the usefulness of the proposed method.
Keywords: Bag of Visual Words; Bag of features; Genetic programming; Term-weighting schemes; Computer vision
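For context on the standard weighting the abstract contrasts against, here is a minimal TF-IDF sketch over visual-word count histograms. It assumes smoothed IDF and per-image L2 normalization (common defaults, not necessarily the exact variant evaluated in the paper, and unrelated to the evolved schemes the authors learn).

```python
import numpy as np

def tfidf_weight(counts):
    """Apply TF-IDF weighting to a matrix of visual-word counts.

    counts: (n_images, n_visual_words) array of raw term frequencies.
    Returns the reweighted matrix, L2-normalized per image.
    """
    counts = np.asarray(counts, dtype=float)
    n_images = counts.shape[0]
    # Document frequency: in how many images each visual word appears.
    df = np.count_nonzero(counts > 0, axis=0)
    # Smoothed inverse document frequency (the "+1" terms are a
    # common convention to avoid division by zero and zero weights).
    idf = np.log((1.0 + n_images) / (1.0 + df)) + 1.0
    weighted = counts * idf
    # L2-normalize each image histogram.
    norms = np.linalg.norm(weighted, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    return weighted / norms
```

Rare visual words (high IDF) end up contributing more to image similarity than words that occur in nearly every image.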
|
|
|
Hugo Jair Escalante, Victor Ponce, Jun Wan, Michael A. Riegler, Baiyu Chen, Albert Clapes, et al. (2016). ChaLearn Joint Contest on Multimedia Challenges Beyond Visual Analysis: An Overview. In 23rd International Conference on Pattern Recognition.
Abstract: This paper provides an overview of the Joint Contest on Multimedia Challenges Beyond Visual Analysis. We organized an academic competition focused on four problems that require effective processing of multimodal information to be solved. Two tracks were devoted to gesture spotting and recognition from RGB-D video, two fundamental problems for human-computer interaction. Another track was devoted to a second round of the first impressions challenge, whose goal was to develop methods to recognize personality traits from short video clips. For this second round we adopted a novel collaborative-competitive (i.e., coopetition) setting. The fourth track was dedicated to the problem of video recommendation for improving user experience. The challenge was open for about 45 days and received outstanding participation: almost 200 participants registered for the contest, and 20 teams submitted predictions in the final stage. The main goals of the challenge were fulfilled: the state of the art was advanced considerably in the four tracks, with novel solutions to the proposed problems (mostly relying on deep learning). However, further research is still required. The data of the four tracks will be made available to allow researchers to keep making progress in the four tracks.
|
|
|
Matthias Eisenmann, Annika Reinke, Vivienn Weru, Minu D. Tizabi, Fabian Isensee, Tim J. Adler, et al. (2023). Why Is the Winner the Best? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 19955–19966).
Abstract: International benchmarking competitions have become fundamental for the comparative performance assessment of image analysis methods. However, little attention has been given to investigating what can be learnt from these competitions. Do they really generate scientific progress? What are common and successful participation strategies? What makes a solution superior to a competing method? To address this gap in the literature, we performed a multi-center study with all 80 competitions that were conducted in the scope of IEEE ISBI 2021 and MICCAI 2021. Statistical analyses performed based on comprehensive descriptions of the submitted algorithms linked to their rank as well as the underlying participation strategies revealed common characteristics of winning solutions. These typically include the use of multi-task learning (63%) and/or multi-stage pipelines (61%), and a focus on augmentation (100%), image preprocessing (97%), data curation (79%), and postprocessing (66%). The “typical” lead of a winning team is a computer scientist with a doctoral degree, five years of experience in biomedical image analysis, and four years of experience in deep learning. Two core general development strategies stood out for highly-ranked teams: the reflection of the metrics in the method design and the focus on analyzing and handling failure cases. According to the organizers, 43% of the winning algorithms exceeded the state of the art but only 11% completely solved the respective domain problem. The insights of our study could help researchers (1) improve algorithm development strategies when approaching new problems, and (2) focus on open research questions revealed by this work.
|
|
|
Sergio Escalera, Marti Soler, Stephane Ayache, Umut Guçlu, Jun Wan, Meysam Madadi, et al. (2019). ChaLearn Looking at People: Inpainting and Denoising Challenges. In The Springer Series on Challenges in Machine Learning (pp. 23–44).
Abstract: Dealing with incomplete information is a well-studied problem in the context of machine learning and computational intelligence. However, in the context of computer vision, the problem has only been studied in specific scenarios (e.g., certain types of occlusions in specific types of images), although it is common to have incomplete information in visual data. This chapter describes the design of an academic competition focusing on inpainting of images and video sequences that was part of the competition program of WCCI 2018 and had a satellite event co-located with ECCV 2018. The ChaLearn Looking at People Inpainting Challenge aimed at advancing the state of the art on visual inpainting by promoting the development of methods for recovering missing and occluded information from images and video. Three tracks were proposed in which visual inpainting might be helpful but still challenging: human body pose estimation, text overlay removal and fingerprint denoising. This chapter describes the design of the challenge, which includes the release of three novel datasets, and the description of evaluation metrics, baselines and the evaluation protocol. The results of the challenge are analyzed and discussed in detail, and conclusions derived from this event are outlined.
|
|
|
Sergio Escalera. (2008). Coding and Decoding Design of ECOCs for Multi-class Pattern and Object Recognition (Petia Radeva, & Oriol Pujol, Eds.). Ph.D. thesis, Ediciones Graficas Rey.
Abstract: Many real problems require multi-class decisions. In the Pattern Recognition field, many techniques have been proposed to deal with the binary problem. However, the extension of many 2-class classifiers to the multi-class case is a hard task. In this sense, Error-Correcting Output Codes (ECOC) have proved to be a powerful tool for combining any number of binary classifiers to model multi-class problems. But there are still many open issues about the capabilities of the ECOC framework. In this thesis, the two main stages of an ECOC design are analyzed: the coding and the decoding steps. We present different problem-dependent designs. These designs take advantage of knowledge of the problem domain to minimize the number of classifiers while obtaining high classification performance. On the other hand, we analyze the ECOC codification in order to define new decoding rules that take full benefit of the information provided at the coding step. Moreover, as successful classification requires a rich feature set, new feature detection/extraction techniques are presented and evaluated on the new ECOC designs. The evaluation of the new methodology is performed on different real and synthetic data sets: the UCI Machine Learning Repository, handwriting symbols, traffic signs from a Mobile Mapping System, Intravascular Ultrasound images, the Caltech Repository data set, and a Chagas disease data set. The results of this thesis show that significant performance improvements are obtained on both traditional coding and decoding ECOC designs when the new coding and decoding rules are taken into account.
|
|
|
Sergio Escalera. (2012). Human Behavior Analysis From Depth Maps. In F.J. Perales, R.B. Fisher, & T.B. Moeslund (Eds.), 7th Conference on Articulated Motion and Deformable Objects (Vol. 7378, pp. 282–292). Springer Heidelberg.
Abstract: Pose Recovery (PR) and Human Behavior Analysis (HBA) have been a main focus of interest since the beginnings of Computer Vision and Machine Learning. PR and HBA were originally addressed by the analysis of still images and image sequences. More recent strategies consist of Motion Capture (MOCAP) technology, based on the synchronization of multiple cameras in controlled environments, and the analysis of depth maps from Time-of-Flight (ToF) technology, based on range image recording from distance sensor measurements. Recently, with the appearance of the multi-modal RGB-D information provided by the low-cost Kinect™ sensor (RGB and Depth, respectively), classical methods for PR and HBA have been redefined, and new strategies have been proposed. In this paper, the recent contributions and future trends of multi-modal RGB-D data analysis for PR and HBA are reviewed and discussed.
|
|
|
Sergio Escalera. (2013). Multi-Modal Human Behaviour Analysis from Visual Data Sources. ERCIM News, 21–22.
Abstract: The Human Pose Recovery and Behaviour Analysis group (HuPBA), University of Barcelona, is developing a line of research on multi-modal analysis of humans in visual data. The novel technology is being applied in several scenarios with high social impact, including sign language recognition, assisted technology and supported diagnosis for the elderly and people with mental/physical disabilities, fitness conditioning, and Human Computer Interaction.
|
|
|
Antonio Esteban Lansaque. (2014). 3D reconstruction and recognition using structured light (Vol. 179). Master's thesis.
Abstract: This work covers the problem of 3D reconstruction, recognition and 6DOF pose estimation. The goal of this project is to reconstruct a 3D scene and to align an object model of the industrial pieces onto the reconstructed scene. The reconstruction algorithm is based on stereo techniques, and the recognition algorithm is based on SHOT descriptors computed on a set of uniform keypoints. Correspondences are used to estimate an initial 6DOF transformation that maps the model onto the scene, and the ICP algorithm is then used to refine the transformation. In order to check the effectiveness of the proposed algorithm, several experiments were performed. These experiments were conducted in a lab environment in order to obtain results under the same conditions in all of them. Although the obtained results are not real-time, the proposed algorithm achieves high object recognition rates.
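The ICP refinement mentioned in the abstract rests, at each iteration, on a closed-form rigid alignment between corresponded point sets. A hedged sketch of that inner step (the standard Kabsch/SVD solution, not the thesis code) looks like this:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst,
    as used inside each ICP iteration (Kabsch/Umeyama, no scaling).

    src, dst: (N, 3) arrays of corresponded 3D points.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections (det = -1 solutions).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```

A full ICP loop would alternate this solve with nearest-neighbor correspondence search until the alignment error stops decreasing.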
|
|
|
Antonio Esteban Lansaque. (2019). An Endoscopic Navigation System for Lung Cancer Biopsy (Debora Gil, & Carles Sanchez, Eds.). Ph.D. thesis, Ediciones Graficas Rey.
Abstract: Lung cancer is one of the most diagnosed cancers among men and women, accounting for 13% of the total cases, with a low 5-year global survival rate in patients. Although early detection increases the survival rate from 38% to 67%, accurate diagnosis remains a challenge. Pathological confirmation requires extracting a sample of the lesion tissue for biopsy. The preferred procedure for tissue biopsy is called bronchoscopy, an endoscopic technique for the internal exploration of airways which facilitates minimally invasive interventions with low risk for the patient. Recent advances in bronchoscopic devices have increased their use for minimally invasive diagnostic and intervention procedures, like lung cancer biopsy sampling. Despite the improvement in bronchoscopic device quality, there is a lack of intelligent computational systems to support in-vivo clinical decisions during examinations. Existing technologies fail to accurately reach the lesion due to several aspects of off-line intervention planning and poor intra-operative guidance at exploration time. Existing guiding systems irradiate patients and clinical staff, can be expensive, and achieve a suboptimal 70% yield boost. Diagnostic yield could be improved, and radiation and costs reduced, by developing intra-operative support systems able to guide the bronchoscopist to the lesion during the intervention. The goal of this PhD thesis is to develop an image-based navigation system for intra-operative guidance of bronchoscopists to a target lesion across a path previously planned on a CT scan. We propose a 3D navigation system which uses the anatomy of video bronchoscopy frames to locate the bronchoscope within the airways. Once the bronchoscope is located, our navigation system is able to indicate the bifurcation which needs to be followed to reach the lesion. In order to facilitate an off-line validation as realistic as possible, we also present a method for augmenting simulated virtual bronchoscopies with the appearance of intra-operative videos. Experiments performed on augmented and intra-operative videos prove that our algorithm can be sped up for an on-line implementation in the operating room.
|
|
|
Sergio Escalera, David M.J. Tax, Oriol Pujol, Petia Radeva, & Robert P.W. Duin. (2011). Multi-Class Classification in Image Analysis Via Error-Correcting Output Codes. In H. Kwasnicka, & L. Jain (Eds.), Innovations in Intelligent Image Analysis (Vol. 339, pp. 7–29). Berlin: Springer Berlin Heidelberg.
Abstract: A common way to model multi-class classification problems is by means of Error-Correcting Output Codes (ECOC). Given a multi-class problem, the ECOC technique designs a codeword for each class, where each position of the code identifies the membership of the class for a given binary problem. A classification decision is obtained by assigning the label of the class with the closest code. In this paper, we overview the state of the art on ECOC designs and test them in real applications. Results on different multi-class data sets show the benefits of using the ensemble of classifiers when categorizing objects in images.
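The codeword-based decision the abstract describes can be illustrated with a toy one-vs-all coding matrix and Hamming decoding. This is a minimal sketch of the framework, not the problem-dependent designs the chapter surveys:

```python
import numpy as np

# One-vs-all coding matrix for 4 classes: row c is the codeword for class c,
# column j defines binary problem j (membership coded as +1 / -1).
CODING = -np.ones((4, 4), dtype=int)
np.fill_diagonal(CODING, 1)

def ecoc_decode(binary_outputs, coding=CODING):
    """Hamming decoding: return the class whose codeword is closest
    to the vector of binary classifier outputs (+1/-1 per problem)."""
    binary_outputs = np.asarray(binary_outputs)
    # Hamming distance between the output vector and each codeword.
    dists = np.sum(coding != binary_outputs, axis=1)
    return int(np.argmin(dists))
```

Richer codes (e.g., dense random or the problem-dependent designs discussed here) only change the CODING matrix; the error-correcting property comes from codewords being far apart, so a few misfiring binary classifiers still decode to the right class.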
|
|
|
Sergio Escalera, Markus Weimer, Mikhail Burtsev, Valentin Malykh, Varvara Logacheva, Ryan Lowe, et al. (2018). Introduction to NIPS 2017 Competition Track. In Sergio Escalera, & Markus Weimer (Eds.), The NIPS ’17 Competition: Building Intelligent Systems (pp. 1–23). Springer.
Abstract: Competitions have become a popular tool in the data science community to solve hard problems, assess the state of the art and spur new research directions. Companies like Kaggle and open source platforms like Codalab connect people with data and a data science problem to those with the skills and means to solve it. Hence, the question arises: What, if anything, could NIPS add to this rich ecosystem?
In 2017, we set out to find out. We attracted 23 potential competitions, of which we selected five to be NIPS 2017 competitions. Our final selection features competitions advancing the state of the art in other sciences, such as “Classifying Clinically Actionable Genetic Mutations” and “Learning to Run”. Others, like “The Conversational Intelligence Challenge” and “Adversarial Attacks and Defences”, generated new data sets that we expect to impact progress in their respective communities for years to come. And the “Human-Computer Question Answering Competition” showed us just how far we as a field have come in ability and efficiency since the breakthrough performance of Watson in Jeopardy. Two additional competitions, DeepArt and AI XPRIZE Milestones, were also associated with the NIPS 2017 competition track; their results are also presented within this chapter.
|
|
|
Ester Fornells, Manuel De Armas, Maria Teresa Anguera, Sergio Escalera, Marcos Antonio Catalán, & Josep Moya. (2018). Desarrollo del proyecto del Consell Comarcal del Baix Llobregat “Buen Trato a las personas mayores y aquellas en situación de fragilidad con sufrimiento emocional: Hacia un envejecimiento saludable” [Development of the Baix Llobregat County Council project “Good treatment of the elderly and people in situations of fragility with emotional suffering: Towards healthy ageing”]. Informaciones Psiquiatricas, 47–59.
|
|
|
David Fernandez, Jon Almazan, Nuria Cirera, Alicia Fornes, & Josep Llados. (2014). BH2M: the Barcelona Historical Handwritten Marriages database. In 22nd International Conference on Pattern Recognition (pp. 256–261).
Abstract: This paper presents an image database of historical handwritten marriage records stored in the archives of Barcelona cathedral, together with the corresponding meta-data intended to evaluate the performance of document analysis algorithms. The contribution of this paper is twofold. First, it presents a complete ground truth which covers the whole pipeline of handwriting recognition research, from layout analysis to recognition and understanding. Second, it is the first dataset in the emerging area of genealogical document analysis, where documents are pseudo-structured manuscripts with specific lexicons, and where the interest goes beyond pure transcription to context-dependent understanding.
|
|
|
J. Filipe, Juan Andrade, & J.L. Ferrier. (2005). FAF 2005.
|
|
|
Wenwen Fu, Zhihong An, Wendong Huang, Haoran Sun, Wenjuan Gong, & Jordi Gonzalez. (2023). A Spatio-Temporal Spotting Network with Sliding Windows for Micro-Expression Detection. Electronics, 12(18), 3947.
Abstract: Micro-expressions reveal underlying emotions and are widely applied in political psychology, lie detection, law enforcement and medical care. Micro-expression spotting aims to detect the temporal locations of facial expressions from video sequences and is a crucial task in micro-expression recognition. In this study, the problem of micro-expression spotting is formulated as micro-expression classification per frame. We propose an effective spotting model with sliding windows called the spatio-temporal spotting network. The method involves a sliding window detection mechanism, combines the spatial features from the local key frames and the global temporal features and performs micro-expression spotting. The experiments are conducted on the CAS(ME)2 database and the SAMM Long Videos database, and the results demonstrate that the proposed method outperforms the state-of-the-art method by 30.58% for the CAS(ME)2 and 23.98% for the SAMM Long Videos according to overall F-scores.
Keywords: micro-expression spotting; sliding window; key frame extraction
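The sliding-window mechanism described in the abstract reduces, at its core, to enumerating overlapping frame windows and mapping window-level scores back onto frames for per-frame spotting. A minimal sketch, where `window_score` is a hypothetical stand-in for the network's window-level micro-expression score:

```python
def sliding_windows(n_frames, window, stride):
    """Return (start, end) frame index pairs of overlapping windows."""
    starts = list(range(0, max(n_frames - window, 0) + 1, stride))
    # Right-align a final window if the last one does not reach the end.
    if starts and starts[-1] + window < n_frames:
        starts.append(n_frames - window)
    return [(s, s + window) for s in starts]

def frame_scores(n_frames, window, stride, window_score):
    """Average window-level scores back onto frames.

    window_score(start, end) -> float is a stand-in for the spotting
    network's score for that window (hypothetical interface).
    """
    scores = [0.0] * n_frames
    counts = [0] * n_frames
    for s, e in sliding_windows(n_frames, window, stride):
        w = window_score(s, e)
        for f in range(s, e):
            scores[f] += w
            counts[f] += 1
    return [sc / c if c else 0.0 for sc, c in zip(scores, counts)]
```

Thresholding the per-frame scores then yields candidate micro-expression intervals, which is what the F-score evaluation in the abstract measures.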
|
|