Meritxell Vinyals, Arnau Ramisa, & Ricardo Toledo. (2007). An Evaluation of an Object Recognition Schema using Multiple Region Detectors. In Artificial Intelligence Research and Development, 163:213–222, ISBN: 978-1-58603-798-7, Proceedings of the 10th International Conference of the ACIA (CCIA'07).
|
Fernando Vilariño, Dimosthenis Karatzas, Marcos Catalan, & Alberto Valcarcel. (2015). A horizon for the Public Library as a place for innovation and creativity. The Library Living Lab in Volpelleres. In The White Book on Public Library Network from Diputació de Barcelona.
|
Carles Sanchez, F. Javier Sanchez, Antoni Rosell, & Debora Gil. (2012). An illumination model of the trachea appearance in videobronchoscopy images. In Image Analysis and Recognition (Vol. 7325, pp. 313–320). LNCS. Springer Berlin Heidelberg.
Abstract: Videobronchoscopy is a medical imaging technique that allows interactive navigation inside the respiratory pathways. This imaging modality provides realistic images and allows minimally invasive procedures. Tracheal procedures are routine interventions that require assessment of the percentage of obstructed pathway for injury (stenosis) detection. Visual assessment in videobronchoscopic sequences requires high expertise in tracheal anatomy and is prone to human error.
This paper introduces an automatic method for estimating the percentage reduction of a stenosed trachea in videobronchoscopic images. We look for tracheal rings, whose deformation determines the degree of obstruction. For ring extraction, we present a ring detector based on an illumination and appearance model. This model allows us to parametrise the ring detection. Finally, we can infer optimal estimation parameters for any video resolution.
Keywords: Bronchoscopy, tracheal ring, stenosis assessment, trachea appearance model, segmentation
|
Salvatore Tabbone, & Oriol Ramos Terrades. (2014). An Overview of Symbol Recognition. In D. Doermann, & K. Tombre (Eds.), Handbook of Document Image Processing and Recognition (Vol. D, pp. 523–551). Springer London.
Abstract: According to the Cambridge Dictionaries Online, a symbol is a sign, shape, or object that is used to represent something else. Symbol recognition is a subfield of general pattern recognition problems that focuses on identifying, detecting, and recognizing symbols in technical drawings, maps, or miscellaneous documents such as logos and musical scores. This chapter aims at providing the reader with an overview of the different existing ways of describing and recognizing symbols and how the field has evolved to attain a certain degree of maturity.
Keywords: Pattern recognition; Shape descriptors; Structural descriptors; Symbol recognition; Symbol spotting
|
Fadi Dornaika, & Bogdan Raducanu. (2012). Analysis and Recognition of Facial Expressions in Videos Using Facial Shape Deformation. In S.E. Carter (Ed.), Facial Expressions: Dynamic Patterns, Impairments and Social Perceptions (pp. 157–178). NOVA Publishers.
|
Alicia Fornes, & Gemma Sanchez. (2014). Analysis and Recognition of Music Scores. In D. Doermann, & K. Tombre (Eds.), Handbook of Document Image Processing and Recognition (Vol. E, pp. 749–774). Springer London.
Abstract: The analysis and recognition of music scores has attracted the interest of researchers for decades. Optical Music Recognition (OMR) is a classical research field of Document Image Analysis and Recognition (DIAR), whose aim is to extract information from music scores. Music scores contain both graphical and textual information, and for this reason, techniques are closely related to graphics recognition and text recognition. Since music scores use a particular diagrammatic notation that follows the rules of music theory, many approaches make use of context information to guide the recognition and solve ambiguities. This chapter overviews the main OMR approaches. Firstly, the different methods are grouped according to the OMR stages, namely, staff removal, music symbol recognition, and syntactical analysis. Secondly, specific approaches for old and handwritten music scores are reviewed. Finally, online approaches and commercial systems are also discussed.
|
Isabelle Guyon, Lisheng Sun-Hosoya, Marc Boulle, Hugo Jair Escalante, Sergio Escalera, Zhengying Liu, et al. (2019). Analysis of the AutoML Challenge Series 2015-2018. In Automated Machine Learning (pp. 177–219). SSCML. Springer.
Abstract: The ChaLearn AutoML Challenge (The authors are in alphabetical order of last name, except the first author who did most of the writing and the second author who produced most of the numerical analyses and plots.) (NIPS 2015 – ICML 2016) consisted of six rounds of a machine learning competition of progressive difficulty, subject to limited computational resources. It was followed by a one-round AutoML challenge (PAKDD 2018). The AutoML setting differs from former model selection/hyper-parameter selection challenges, such as the one we previously organized for NIPS 2006: the participants aim to develop fully automated and computationally efficient systems, capable of being trained and tested without human intervention, with code submission. This chapter analyzes the results of these competitions and provides details about the datasets, which were not revealed to the participants. The solutions of the winners are systematically benchmarked over all datasets of all rounds and compared with canonical machine learning algorithms available in scikit-learn. All materials discussed in this chapter (data and code) have been made publicly available at http://automl.chalearn.org/.
|
Panagiota Spyridonos, Fernando Vilariño, Jordi Vitria, Fernando Azpiroz, & Petia Radeva. (2006). Anisotropic Feature Extraction from Endoluminal Images for Detection of Intestinal Contractions. In R. Larsen, M. Nielsen, & J. Sporring (Eds.), 9th International Conference on Medical Image Computing and Computer–Assisted Intervention (Vol. 4191, pp. 161–168). LNCS. Berlin Heidelberg: Springer-Verlag.
Abstract: Wireless endoscopy is a very recent and at the same time unique technique that allows visualizing and studying the occurrence of contractions and analyzing intestinal motility. Feature extraction is essential for getting efficient patterns to detect contractions in wireless video endoscopy of the small intestine. We propose a novel method based on anisotropic image filtering and efficient statistical classification of contraction features. In particular, we apply the image gradient tensor for mining informative skeletons from the original image and a sequence of descriptors for capturing the characteristic pattern of contractions. Features extracted from the endoluminal images were evaluated in terms of their discriminatory ability in correctly classifying images as either belonging to contractions or not. Classification was performed by means of a support vector machine classifier with a radial basis function kernel. Our classification rates gave a sensitivity of 90.84% and a specificity of 94.43%, respectively. These preliminary results highlight the high efficiency of the selected descriptors and support the feasibility of the proposed method in assisting the automatic detection and analysis of contractions.
|
Debora Gil, Oriol Rodriguez-Leor, Petia Radeva, & Aura Hernandez-Sabate. (2007). Assessing Artery Motion Compensation in IVUS. In Computer Analysis Of Images And Patterns (Vol. 4673, pp. 213–220). Lecture Notes in Computer Science. Heidelberg: Springer.
Abstract: Cardiac dynamics suppression is a main issue for visual improvement and computation of tissue mechanical properties in IntraVascular UltraSound (IVUS). Although several motion compensation techniques have arisen in recent times, there is a lack of objective evaluation of motion reduction in in vivo pullbacks. We consider that the assessment protocol deserves special attention to make clinical applicability as reliable as possible. Our work focuses on defining a quality measure and a validation protocol for assessing IVUS motion compensation. On the grounds of continuum mechanics laws, we introduce a novel score measuring motion reduction in in vivo sequences. Synthetic experiments validate the proposed score as a measure of motion parameter accuracy, while results in in vivo pullbacks show its reliability in clinical cases.
Keywords: validation standards; quality measures; IVUS motion compensation; conservation laws; Fourier development
|
J. Elder, Fadi Dornaika, Y. Hou, & R. Goldstein. (2005). Attentive wide-field sensing for visual telepresence and surveillance. In L. Itti, G. Rees, & J. Tsotsos (Eds.), Neurobiology of Attention. Academic Press/Elsevier.
|
Pau Baiget, Carles Fernandez, Xavier Roca, & Jordi Gonzalez. (2007). Automatic Learning of Conceptual Knowledge for the Interpretation of Human Behavior in Video Sequences. In 3rd Iberian Conference on Pattern Recognition and Image Analysis (IbPRIA 2007), J. Marti et al. (Eds.) LNCS 4477:507–514.
|
Jordina Torrents-Barrena, Aida Valls, Petia Radeva, Meritxell Arenas, & Domenec Puig. (2015). Automatic Recognition of Molecular Subtypes of Breast Cancer in X-Ray images using Segmentation-based Fractal Texture Analysis. In Artificial Intelligence Research and Development (Vol. 277, pp. 247–256). Frontiers in Artificial Intelligence and Applications. IOS Press.
Abstract: Breast cancer disease has recently been classified into four subtypes regarding the molecular properties of the affected tumor region. For each patient, an accurate diagnosis of the specific type is vital to decide the most appropriate therapy in order to enhance life prospects. Nowadays, advanced therapeutic diagnosis research is focused on gene selection methods, which are not robust enough. Hence, we hypothesize that computer vision algorithms can offer benefits to address the problem of discriminating among them through X-Ray images. In this paper, we propose a novel approach driven by texture feature descriptors and machine learning techniques. First, we segment the tumor part through an active contour technique and then, we perform a complete fractal analysis to collect qualitative information of the region of interest in the feature extraction stage. Finally, several supervised and unsupervised classifiers are used to perform multiclass classification of the aforementioned data. The experimental results presented in this paper support that it is possible to establish a relation between each tumor subtype and the extracted features of the patterns revealed on mammograms.
|
Pau Baiget, Xavier Roca, & Jordi Gonzalez. (2008). Autonomous Virtual Agents for Performance Evaluation of Tracking Algorithms. In Articulated Motion and Deformable Objects, 5th International Conference AMDO 2008 (Vol. 5098, pp. 299–308). LNCS.
|
Joaquin Salas, P. Martinez, & Jordi Gonzalez. (2006). Background Updating with the Use of Intrinsic Curves. In International Conference on Image Analysis and Recognition (ICIAR'06), LNCS 4141 (A. Campilho et al., eds.), 1: 731–742, ISBN 978-3-540-44891-4.
|
Jun Wan, Guodong Guo, Sergio Escalera, Hugo Jair Escalante, & Stan Z Li. (2023). Best Solutions Proposed in the Context of the Face Anti-spoofing Challenge Series. In Advances in Face Presentation Attack Detection (pp. 37–78).
Abstract: The PAD competitions we organized attracted more than 835 teams from home and abroad, most of them from the industry, which shows that the topic of face anti-spoofing is closely related to daily life, and there is an urgent need for advanced algorithms to meet its application needs. Specifically, the Chalearn LAP multi-modal face anti-spoofing attack detection challenge attracted more than 300 teams for the development phase, with a total of 13 teams qualifying for the final round; the Chalearn Face Anti-spoofing Attack Detection Challenge attracted 340 teams in the development stage, and finally, 11 and 8 teams submitted their codes in the single-modal and multi-modal face anti-spoofing recognition challenges, respectively; the 3D High-Fidelity Mask Face Presentation Attack Detection Challenge attracted 195 teams for the development phase, with a total of 18 teams qualifying for the final round. All the results were verified and re-run by the organizing team, and the results were used for the final ranking. In this chapter, we briefly review the methods developed by the teams participating in each competition, and describe the algorithms of the top three ranked teams in detail.
|