|
D. Rincon, E. Frumento, & M. Angel Viñas. (1999). Description of a teleconsultation platform and its interaction with access networks. In V Open European Summer School (pp. 145–150).
|
|
|
A. Auge, Javier Varona, & Juan J. Villanueva. (1997). Tumour Segmentation in Mammographies with Neural Networks. Application to Tumoural Volume Approximation. In Proceedings of the VII NSPRIA, Vol. II, CVC–UAB.
|
|
|
Felipe Lumbreras, & Joan Serrat. (1996). Segmentation of petrographical images of marbles. Computers and Geosciences, 22(5), 547–558.
|
|
|
A.F. Sole, S. Ngan, G. Sapiro, X. Hu, & Antonio Lopez. (2001). Anisotropic 2-D and 3-D Averaging of fMRI Signals. IEEE Transactions on Medical Imaging, 20(2), 86–93.
|
|
|
Carme Julia, Angel Sappa, Felipe Lumbreras, Joan Serrat, & Antonio Lopez. (2008). Rank Estimation in 3D Multibody Motion Segmentation. Electronics Letters, 44(4), 279–280.
Abstract: A novel technique for rank estimation in 3D multibody motion segmentation is proposed. It is based on the study of the frequency spectra of moving rigid objects and does not use or assume a prior knowledge of the objects contained in the scene (i.e. number of objects and motion). The significance of rank estimation on multibody motion segmentation results is shown by using two motion segmentation algorithms over both synthetic and real data.
|
|
|
Joan Serrat, Ferran Diego, Felipe Lumbreras, Jose Manuel Alvarez, Antonio Lopez, & C. Elvira. (2008). Dynamic Comparison of Headlights. Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering, 222(5), 643–656.
Keywords: video alignment
|
|
|
Jose Manuel Alvarez, & Antonio Lopez. (2011). Road Detection Based on Illuminant Invariance. TITS - IEEE Transactions on Intelligent Transportation Systems, 12(1), 184–193.
Abstract: By using an onboard camera, it is possible to detect the free road surface ahead of the ego-vehicle. Road detection is of high relevance for autonomous driving, road departure warning, and supporting driver-assistance systems such as vehicle and pedestrian detection. The key for vision-based road detection is the ability to classify image pixels as belonging or not to the road surface. Identifying road pixels is a major challenge due to the intraclass variability caused by lighting conditions. A particularly difficult scenario appears when the road surface has both shadowed and nonshadowed areas. Accordingly, we propose a novel approach to vision-based road detection that is robust to shadows. The novelty of our approach relies on using a shadow-invariant feature space combined with a model-based classifier. The model is built online to improve the adaptability of the algorithm to the current lighting and the presence of other vehicles in the scene. The proposed algorithm works in still images and does not depend on either road shape or temporal restrictions. Quantitative and qualitative experiments on real-world road sequences with heavy traffic and shadows show that the method is robust to shadows and lighting variations. Moreover, the proposed method provides the highest performance when compared with hue-saturation-intensity (HSI)-based algorithms.
Keywords: road detection
|
|
|
C. Alejandro Parraga, Robert Benavente, & Maria Vanrell. (2010). Towards a general model of colour categorization which considers context. PER - Perception. ECVP Abstract Supplement, 39, 86.
Abstract: In two previous experiments [Parraga et al, 2009 J. of Im. Sci. and Tech 53(3) 031106; Benavente et al, 2009 Perception 38 ECVP Supplement, 36] the boundaries of basic colour categories were measured. In the first experiment, samples were presented in isolation (ie on a dark background) and boundaries were measured using a yes/no paradigm. In the second, subjects adjusted the chromaticity of a sample presented on a random Mondrian background to find the boundary between pairs of adjacent colours. Results from these experiments showed significant differences, but it was not possible to conclude whether this discrepancy was due to the absence/presence of a colourful background or to the differences in the paradigms used. In this work, we settle this question by repeating the first experiment (ie samples presented on a dark background) using the second paradigm. A comparison of results shows that although boundary locations are very similar, boundaries measured in context are significantly different (more diffuse) than those measured in isolation (confirmed by a Student's t-test analysis on the statistical distributions of the subjects' answers). In addition, we completed the mapping of colour name space by measuring the boundaries between chromatic colours and the achromatic centre. With these results we completed our parametric fuzzy-sets model of colour naming space.
|
|
|
Miquel Ferrer, Dimosthenis Karatzas, Ernest Valveny, I. Bardaji, & Horst Bunke. (2011). A Generic Framework for Median Graph Computation based on a Recursive Embedding Approach. CVIU - Computer Vision and Image Understanding, 115(7), 919–928.
Abstract: The median graph has been shown to be a good choice to obtain a representative of a set of graphs. However, its computation is a complex problem. Recently, graph embedding into vector spaces has been proposed to obtain approximations of the median graph. The problem with such an approach is how to go from a point in the vector space back to a graph in the graph space. The main contribution of this paper is the generalization of this previous method, proposing a generic recursive procedure that permits recovering the graph corresponding to a point in the vector space, introducing only the amount of approximation inherent to the use of graph matching algorithms. In order to evaluate the proposed method, we compare it with the set median and with the other state-of-the-art embedding-based methods for the median graph computation. The experiments are carried out using four different databases (one semi-artificial and three containing real-world data). Results show that with the proposed approach we can obtain better medians, in terms of the sum of distances to the training graphs, than with the previous existing methods.
Keywords: Median Graph, Graph Embedding, Graph Matching, Structural Pattern Recognition
|
|
|
Manuel Carbonell, Alicia Fornes, Mauricio Villegas, & Josep Llados. (2020). A Neural Model for Text Localization, Transcription and Named Entity Recognition in Full Pages. PRL - Pattern Recognition Letters, 136, 219–227.
Abstract: In the last years, the consolidation of deep neural network architectures for information extraction in document images has brought big improvements in the performance of each of the tasks involved in this process, consisting of text localization, transcription, and named entity recognition. However, this process is traditionally performed with separate methods for each task. In this work we propose an end-to-end model that combines a one stage object detection network with branches for the recognition of text and named entities respectively in a way that shared features can be learned simultaneously from the training error of each of the tasks. By doing so the model jointly performs handwritten text detection, transcription, and named entity recognition at page level with a single feed forward step. We exhaustively evaluate our approach on different datasets, discussing its advantages and limitations compared to sequential approaches. The results show that the model is capable of benefiting from shared features by simultaneously solving interdependent tasks.
|
|
|
Francisco Alvaro, Francisco Cruz, Joan Andreu Sanchez, Oriol Ramos Terrades, & Jose Miguel Benedi. (2015). Structure Detection and Segmentation of Documents Using 2D Stochastic Context-Free Grammars. NEUCOM - Neurocomputing, 150(A), 147–154.
Abstract: In this paper we define a bidimensional extension of Stochastic Context-Free Grammars for structure detection and segmentation of images of documents. Two sets of text classification features are used to perform an initial classification of each zone of the page. Then, the document segmentation is obtained as the most likely hypothesis according to a stochastic grammar. We used a dataset of historical marriage license books to validate this approach. We also tested several inference algorithms for Probabilistic Graphical Models, and the results showed that the proposed grammatical model outperformed the other methods. Furthermore, grammars also provide the document structure along with its segmentation.
Keywords: document image analysis; stochastic context-free grammars; text classification features
|
|
|
Josep Llados, Ernest Valveny, & Enric Marti. (2000). Symbol Recognition in Document Image Analysis: Methods and Challenges. In Recent Research Developments in Pattern Recognition (Vol. 1, pp. 151–178). Transworld Research Network.
|
|
|
Josep Llados, Enric Marti, & Juan J. Villanueva. (2001). Symbol recognition by error-tolerant subgraph matching between region adjacency graphs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(10), 1137–1143.
Abstract: The recognition of symbols in graphic documents is an intensive research activity in the community of pattern recognition and document analysis. A key issue in the interpretation of maps, engineering drawings, diagrams, etc. is the recognition of domain dependent symbols according to a symbol database. In this work we first review the most outstanding symbol recognition methods from two different points of view: application domains and pattern recognition methods. In the second part of the paper, open and unaddressed problems involved in symbol recognition are described, analyzing their current state of the art and discussing future research challenges. Thus, issues such as symbol representation, matching, segmentation, learning, scalability of recognition methods and performance evaluation are addressed in this work. Finally, we discuss the perspectives of symbol recognition concerning new paradigms such as user interfaces in handheld computers or document database and WWW indexing by graphical content.
|
|
|
Hao Fang, Ajian Liu, Jun Wan, Sergio Escalera, Hugo Jair Escalante, & Zhen Lei. (2023). Surveillance Face Presentation Attack Detection Challenge. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (pp. 6360–6370).
Abstract: Face Anti-spoofing (FAS) is essential to secure face recognition systems from various physical attacks. However, most of the studies lacked consideration of long-distance scenarios. Specifically, compared with FAS in traditional scenes such as phone unlocking, face payment, and self-service security inspection, FAS in long-distance scenarios such as station squares, parks, and self-service supermarkets is equally important, but it has not been sufficiently explored yet. In order to fill this gap in the FAS community, we collect a large-scale Surveillance High-Fidelity Mask dataset (SuHiFiMask). SuHiFiMask contains 10,195 videos from 101 subjects of different age groups, which are collected by 7 mainstream surveillance cameras. Based on this dataset and protocol-3 for evaluating the robustness of the algorithm under quality changes, we organized a face presentation attack detection challenge in surveillance scenarios. It attracted 180 teams for the development phase, with a total of 37 teams qualifying for the final round. The organization team re-verified and re-ran the submitted code and used the results as the final ranking. In this paper, we present an overview of the challenge, including an introduction to the dataset used, the definition of the protocol, the evaluation metrics, and the announcement of the competition results. Finally, we present the top-ranked algorithms and the research ideas provided by the competition for attack detection in long-range surveillance scenarios.
|
|
|
Cristina Palmero, Javier Selva, Mohammad Ali Bagheri, & Sergio Escalera. (2018). Recurrent CNN for 3D Gaze Estimation using Appearance and Shape Cues. In 29th British Machine Vision Conference.
Abstract: Gaze behavior is an important non-verbal cue in social signal processing and human-computer interaction. In this paper, we tackle the problem of person- and head pose-independent 3D gaze estimation from remote cameras, using a multi-modal recurrent convolutional neural network (CNN). We propose to combine face, eyes region, and face landmarks as individual streams in a CNN to estimate gaze in still images. Then, we exploit the dynamic nature of gaze by feeding the learned features of all the frames in a sequence to a many-to-one recurrent module that predicts the 3D gaze vector of the last frame. Our multi-modal static solution is evaluated on a wide range of head poses and gaze directions, achieving a significant improvement of 14.6% over the state of the art on the EYEDIAP dataset, further improved by 4% when the temporal modality is included.
|
|