|
C. Alejandro Parraga, Robert Benavente, & Maria Vanrell. (2007). Modeling Colour-Naming Space with Fuzzy Sets. Perception 36:198–198, supp.
|
|
|
Cristina Cañero, Nikolaos Thomos, George A. Triantafyllidis, George C. Litos, & Michael G. Strintzis. (2005). Mobile Tele-echography: User Interface Design. IEEE Transactions on Information Technology in Biomedicine, 9(1):44–49 (IF: 1.376).
|
|
|
Aura Hernandez-Sabate, Meritxell Joanpere, Nuria Gorgorio, & Lluis Albarracin. (2015). Mathematics learning opportunities when playing a Tower Defense Game. IJSG - International Journal of Serious Games, 57–71.
Abstract: A qualitative research study is presented herein with the purpose of identifying mathematics learning opportunities in students between 10 and 12 years old while playing a commercial version of a Tower Defense game. These learning opportunities are understood as mathematicisable moments of the game and involve the establishment of relationships between the game and mathematical problem solving. Based on the analysis of these mathematicisable moments, we conclude that the game can promote problem-solving processes and learning opportunities that can be associated with different mathematical contents that appear in mathematics curricula, though it seems that a teacher or new game elements might be needed to facilitate the processes.
Keywords: Tower Defense game; learning opportunities; mathematics; problem solving; game design
|
|
|
Joan Serrat, Ferran Diego, & Felipe Lumbreras. (2008). Los faros delanteros a través del objetivo. UAB Divulga, Revista de divulgación científica.
|
|
|
Bogdan Raducanu, & Jordi Vitria. (2008). Learning to Learn: From Smart Machines to Intelligent Machines. PRL - Pattern Recognition Letters, 1024–1032.
|
|
|
S. Tanimoto, N. Bruining, David Rotger, Petia Radeva, J. Ligthart, R.T. van Domburg, et al. (2008). Late Stent Recoil of the Bioabsorbable Everolimus Eluting Coronary Stent and its Relationship with Stent Struts Distribution and Plaque Morphology. Journal of the American College of Cardiology, 52(20):1616–1620.
|
|
|
Alicia Fornes, Josep Llados, Oriol Ramos Terrades, & Marçal Rusiñol. (2016). La Visió per Computador com a Eina per a la Interpretació Automàtica de Fonts Documentals. Lligall, Revista Catalana d'Arxivística, 20–46.
|
|
|
Jian Yang, Alejandro F. Frangi, Jing-Yu Yang, David Zhang, & Zhong Jin. (2005). KPCA Plus LDA: A Complete Kernel Fisher Discriminant Framework for Feature Extraction and Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(2):230–244 (IF: 3.810).
|
|
|
Zeynep Yucel, Albert Ali Salah, Çetin Meriçli, Tekin Meriçli, Roberto Valenti, & Theo Gevers. (2013). Joint Attention by Gaze Interpolation and Saliency. T-CIBER - IEEE Transactions on Cybernetics, 829–842.
Abstract: Joint attention, which is the ability of coordination of a common point of reference with the communicating party, emerges as a key factor in various interaction scenarios. This paper presents an image-based method for establishing joint attention between an experimenter and a robot. The precise analysis of the experimenter's eye region requires stability and high-resolution image acquisition, which is not always available. We investigate regression-based interpolation of the gaze direction from the head pose of the experimenter, which is easier to track. Gaussian process regression and neural networks are contrasted to interpolate the gaze direction. Then, we combine gaze interpolation with image-based saliency to improve the target point estimates and test three different saliency schemes. We demonstrate the proposed method on a human-robot interaction scenario. Cross-subject evaluations, as well as experiments under adverse conditions (such as dimmed or artificial illumination or motion blur), show that our method generalizes well and achieves rapid gaze estimation for establishing joint attention.
|
|
|
Xavier Otazu, M. Gonzalez-Audicana, O. Fors, & J. Nuñez. (2005). Introduction of Sensor Spectral Response Into Image Fusion Methods. Application to Wavelet-Based Methods. IEEE Transactions on Geoscience and Remote Sensing, 43(10): 2376–2385 (IF: 1.627).
|
|
|
Oriol Rodriguez-Leor, E. Fernandez-Nofrerias, J. Mauri, C. Garcia, R. Villuendas, V. Valle, et al. (2003). Intravascular ultrasound segmentation using local binary patterns. European Heart Journal (IF: 5.997), ESC Congress 2003.
|
|
|
Carles Fernandez, Pau Baiget, Xavier Roca, & Jordi Gonzalez. (2008). Interpretation of Complex Situations in a Semantic-based Surveillance Framework. Signal Processing: Image Communication, Special Issue on Semantic Analysis for Interactive Multimedia Services, 554–569.
Abstract: The integration of cognitive capabilities in computer vision systems requires both to enable high semantic expressiveness and to deal with high computational costs as large amounts of data are involved in the analysis. This contribution describes a cognitive vision system conceived to automatically provide high-level interpretations of complex real-time situations in outdoor and indoor scenarios, and to eventually maintain communication with casual end users in multiple languages. The main contributions are: (i) the design of an integrative multilevel architecture for cognitive surveillance purposes; (ii) the proposal of a coherent taxonomy of knowledge to guide the process of interpretation, which leads to the conception of a situation-based ontology; (iii) the use of situational analysis for content detection and a progressive interpretation of semantically rich scenes, by managing incomplete or uncertain knowledge, and (iv) the use of such an ontological background to enable multilingual capabilities and advanced end-user interfaces. Experimental results are provided to show the feasibility of the proposed approach.
Keywords: Cognitive vision system; Situation analysis; Applied ontologies
|
|
|
Pau Rodriguez, Jordi Gonzalez, Josep M. Gonfaus, & Xavier Roca. (2019). Integrating Vision and Language in Social Networks for Identifying Visual Patterns of Personality Traits. IJSSH - International Journal of Social Science and Humanity, 6–12.
Abstract: Social media, as a major platform for communication and information exchange, is a rich repository of the opinions and sentiments of 2.3 billion users about a vast spectrum of topics. In this sense, user text interactions are widely used to sense the whys of certain social users' demands and culturally driven interests. However, the knowledge embedded in the 1.8 billion pictures which are uploaded daily in public profiles has just started to be exploited. Following this trend on visual-based social analysis, we present a novel methodology based on neural networks to build a combined image-and-text based personality trait model, trained with images posted together with words found highly correlated to specific personality traits. So, the key contribution in this work is to explore whether OCEAN personality trait modeling can be addressed based on images, here called MindPics, appearing with certain tags with psychological insights. We found that there is a correlation between posted images and the personality estimated from their accompanying texts. Thus, the experimental results are consistent with previous cyber-psychology results based on texts, suggesting that images could also be used for personality estimation: classification results on some personality traits show that specific and characteristic visual patterns emerge, in essence representing abstract concepts. These results open new avenues of research for further refining the proposed personality model under the supervision of psychology experts, and for further substituting current textual personality questionnaires with image-based ones.
|
|
|
I. King, & Zhong Jin. (2003). Integrated Probability Function and Its Application to Content-Based Image Retrieval By Relevance Feedback. Pattern Recognition, 36(9): 2177–2186 (IF: 1.611).
|
|
|
Fadi Dornaika, & Bogdan Raducanu. (2007). Inferring Facial Expressions from Videos: Tool and Application. Signal Processing: Image Communication, 22(9):769–784.
|
|