Susana Alvarez, Xavier Otazu, & Maria Vanrell. (2005). Image Segmentation Based on Inter-Feature Distance Maps. In Frontiers in Artificial Intelligence and Applications, IOS Press, 131: 75–82.
|
Jordi Vitria, Petia Radeva, & I. Aguilo (Eds.). (2004). Recent Advances in Artificial Intelligence Research and Development. In Frontiers in Artificial Intelligence and Applications, 113, IOS Press, ISBN: 1-58603-466-9.
|
Jordi Gonzalez, J. Varona, Xavier Roca, & Juan J. Villanueva. (2003). A Human Action Comparison Framework for Motion Understanding.
|
Jordi Gonzalez, J. Varona, Xavier Roca, & Juan J. Villanueva. (2003). Human Sequence Evaluation: towards Knowledge-based Scene Interpretations.
|
Xavier Baro, & Jordi Vitria. (2005). Feature Selection with Non-Parametric Mutual Information for Adaboost Learning. In Frontiers in Artificial Intelligence and Applications / Artificial Intelligence Research and Development, 131:131–138, Eds: B. Lopez, J. Melendez, P. Radeva, J. Vitria, IOS Press, ISBN: 1-58603-560-6.
|
Estefania Talavera, Alexandre Cola, Nicolai Petkov, & Petia Radeva. (2019). Towards Egocentric Person Re-identification and Social Pattern Analysis. In Frontiers in Artificial Intelligence and Applications (Vol. 310, pp. 203–211).
CoRR abs/1905.04073.
Abstract: Wearable cameras capture a first-person view of the daily activities of the camera wearer, offering a visual diary of the user's behaviour. Detecting the appearance of the people the camera wearer interacts with is of high interest for social interaction analysis. Generally speaking, social events, lifestyle and health are highly correlated, but there is a lack of tools to monitor and analyse them. We consider that egocentric vision provides a tool to obtain information about and understand users' social interactions. We propose a model that enables us to evaluate and visualize social traits obtained by analysing the appearance of social interactions within egocentric photostreams. Given sets of egocentric images, we detect the faces that appear across the camera wearer's days and rely on clustering algorithms to group their feature descriptors in order to re-identify persons. The recurrence of detected faces within photostreams allows us to form an idea of the user's social pattern of behaviour. We validated our model on several weeks of photostreams recorded by different camera wearers. Our findings indicate that social profiles are potentially useful for social behaviour interpretation.
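The abstract above describes grouping face feature descriptors with a clustering algorithm to re-identify people across photostreams. The paper does not specify the exact algorithm here; the following is a minimal illustrative sketch (not the authors' code) of one common approach, greedy cosine-similarity clustering of descriptor vectors, using synthetic embeddings in place of real face descriptors:

```python
import numpy as np

def cluster_descriptors(descs, threshold=0.8):
    """Greedily assign each descriptor to the first existing cluster whose
    centroid has cosine similarity above `threshold`; otherwise open a new
    cluster. Returns one cluster label per descriptor."""
    centroids, labels = [], []
    for d in descs:
        d = d / np.linalg.norm(d)          # work with unit vectors
        best, best_sim = -1, threshold
        for i, c in enumerate(centroids):
            sim = float(d @ (c / np.linalg.norm(c)))
            if sim > best_sim:
                best, best_sim = i, sim
        if best == -1:                     # no cluster is similar enough
            centroids.append(d.copy())
            labels.append(len(centroids) - 1)
        else:
            centroids[best] += d           # update the running centroid
            labels.append(best)
    return labels

# Two synthetic "identities": descriptors scattered around two
# orthogonal directions (stand-ins for real face embeddings).
rng = np.random.default_rng(0)
a = rng.normal([5, 0, 0], 0.1, size=(4, 3))
b = rng.normal([0, 5, 0], 0.1, size=(4, 3))
labels = cluster_descriptors(np.vstack([a, b]))
```

Recurrence of a cluster across many days would then indicate a frequent social contact, as the abstract suggests.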
|
Gemma Sanchez, Josep Llados, & K. Tombre. (2001). An Error-Correction Graph Grammar to Recognize Textured Symbols.
|
Albert Andaluz, Francesc Carreras, Debora Gil, & Jaume Garcia. (2010). Una aplicació amigable pel càlcul de indicadors clínics del ventricle esquerre [A user-friendly application for computing clinical indicators of the left ventricle]. Barcelona: Biocat.
|
Mohammad Momeny, Ali Asghar Neshat, Ahmad Jahanbakhshi, Majid Mahmoudi, Yiannis Ampatzidis, & Petia Radeva. (2023). Grading and fraud detection of saffron via learning-to-augment incorporated Inception-v4 CNN. Food Control, 147, 109554.
Abstract: Saffron is a well-known product in the food industry. It is one of the spices that are sometimes adulterated purely for economic gain. Today, machine vision systems are widely used to control the quality of food and agricultural products as a new, non-destructive, and inexpensive approach. In this study, a machine vision system based on deep learning was used to detect fraud and grade saffron quality. A dataset of 1869 images was created and categorized into six classes: dried saffron stigma using a dryer; dried saffron stigma using the pressing method; pure stem of saffron; sunflower; saffron stem mixed with food colouring; and corn silk mixed with food colouring. A Learning-to-Augment incorporated Inception-v4 Convolutional Neural Network (LAII-v4 CNN) was developed for grading and fraud detection of saffron in images captured by smartphones. The best data-augmentation policies were selected with the proposed LAII-v4 CNN using images corrupted by Gaussian, speckle, and impulse noise, in order to reduce overfitting. The proposed LAII-v4 CNN was compared with regular CNN-based methods and traditional classifiers: an Ensemble of Bagged Decision Trees, an Ensemble of Boosted Decision Trees, k-Nearest Neighbor, Random Under-sampling Boosted Trees, and a Support Vector Machine were used to classify features extracted by Histograms of Oriented Gradients and Local Binary Patterns and selected by Principal Component Analysis. The results showed that the proposed LAII-v4 CNN achieved the best performance, with an accuracy of 99.5%, by employing batch normalization, Dropout, and leaky ReLU.
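The baseline pipeline the abstract compares against (handcrafted HOG/LBP features, reduced by Principal Component Analysis, then fed to a classical classifier) can be sketched in a few lines of numpy. This is not the paper's implementation; it is a minimal illustration using synthetic feature vectors as stand-ins for HOG/LBP descriptors, PCA via SVD, and a simple nearest-centroid classifier in place of the ensemble methods:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-ins for HOG/LBP feature vectors of two saffron classes,
# separable along the first feature dimension.
X0 = rng.normal(0.0, 0.3, size=(20, 16)); X0[:, 0] += 3.0
X1 = rng.normal(0.0, 0.3, size=(20, 16)); X1[:, 0] -= 3.0
X = np.vstack([X0, X1])
y = np.array([0] * 20 + [1] * 20)

# PCA via SVD of the centered data: keep the top-2 principal components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T

# Nearest-centroid classifier in the reduced feature space.
c0 = Z[y == 0].mean(axis=0)
c1 = Z[y == 1].mean(axis=0)
pred = np.where(np.linalg.norm(Z - c0, axis=1) < np.linalg.norm(Z - c1, axis=1), 0, 1)
accuracy = (pred == y).mean()
```

On this cleanly separated toy data the reduced-dimension classifier is exact; the paper's point is that on real smartphone images the deep LAII-v4 CNN outperforms such handcrafted-feature pipelines.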
|
Jordi Gonzalez, & Thomas B. Moeslund. (2008). Tracking Humans for the Evaluation of their Motion in Image Sequences.
|
Carles Fernandez, Pau Baiget, & Jordi Gonzalez. (2008). Cognitive-Guided Semantic Exploitation in Video Surveillance Interfaces. In First International Workshop on Tracking Humans for the Evaluation of their Motion in Image Sequences, BMVC 2008, (53–60).
|
Pau Baiget, Eric Sommerlade, I. Reid, & Jordi Gonzalez. (2008). Finding Prototypes to Estimate Trajectory Development in Outdoor Scenarios. In First International Workshop on Tracking Humans for the Evaluation of their Motion in Image Sequences, BMVC 2008, (27–34).
|
Ognjen Rudovic, & Xavier Roca. (2008). Building Temporal Templates for Human Behaviour Classification. In First International Workshop on Tracking Humans for the Evaluation of their Motion in Image Sequences, BMVC 2008, (79–88).
|
E. Pastor, A. Agueda, Juan Andrade, M. Muñoz, Y. Perez, & E. Planas. (2006). Computing the rate of spread of linear flame fronts by thermal image processing. Fire Safety Journal, 41(8):569–579.
|
Fadi Dornaika, & Bogdan Raducanu. (2012). Analysis and Recognition of Facial Expressions in Videos Using Facial Shape Deformation. In S.E. Carter (Ed.), Facial Expressions: Dynamic Patterns, Impairments and Social Perceptions (pp. 157–178). NOVA Publishers.
|