Petia Radeva, & M. Scoccianti. (2000). 3D Reconstruction of Abdominal Aortic Aneurysm.
|
Petia Radeva, Maya Dimitrova, Ch. Roumenin, David Rotger, D. Nikolov, & Juan J. Villanueva. (2004). Integration of Multiple Sensor Modalities in ActiveVessel Cardiology Workstation.
|
Petia Radeva, Ricardo Toledo, Craig Von Land, & Juan J. Villanueva. (1998). 3D Vessel Reconstruction from Biplane Angiograms using Snakes.
|
Petia Radeva, Ricardo Toledo, Craig Von Land, & Juan J. Villanueva. (1998). 3D Dynamic Model of the Coronary Tree.
|
Philippe Dosch, & Josep Llados. (2004). Vectorial Signatures for Symbol Discrimination.
|
Philippe Dosch, & Josep Llados. (2003). Vectorial Signatures for Symbol Discrimination.
|
Pierluigi Casale. (2008). Social Environment Description from Data Collected with a Wearable Device.
|
R. Herault, Franck Davoine, Fadi Dornaika, & Y. Grandvalet. (2006). Simultaneous and robust face and facial action tracking.
|
Ramon Baldrich, Maria Vanrell, Robert Benavente, & Anna Salvatella. (2003). Color Enhancement based on perceptual sharpening.
|
Ramon Baldrich, Ricardo Toledo, Ernest Valveny, & Maria Vanrell. (2002). Perceptual Colour Image Segmentation.
|
Razieh Rastgoo, Kourosh Kiani, & Sergio Escalera. (2022). Word separation in continuous sign language using isolated signs and post-processing.
Abstract: Continuous Sign Language Recognition (CSLR) is a long-standing challenge in Computer Vision due to the difficulty of detecting explicit boundaries between the words in a sign sentence. To deal with this challenge, we propose a two-stage model. In the first stage, the predictor model, which combines a CNN, SVD, and LSTM, is trained on the isolated signs. In the second stage, we apply a post-processing algorithm to the Softmax outputs obtained from the first part of the model in order to separate the isolated signs within the continuous signs. Due to the lack of a large dataset including both sign sequences and the corresponding isolated signs, two public datasets in Isolated Sign Language Recognition (ISLR), RKS-PERSIANSIGN and ASLVID, are used for evaluation. Results on the continuous sign videos confirm the efficiency of the proposed model in detecting isolated sign boundaries.
|
Razieh Rastgoo, Kourosh Kiani, & Sergio Escalera. (2022). A Non-Anatomical Graph Structure for isolated hand gesture separation in continuous gesture sequences.
Abstract: Continuous Hand Gesture Recognition (CHGR) has been extensively studied by researchers in the last few decades. Recently, one model has been presented to deal with the challenge of detecting the boundaries of isolated gestures in a continuous gesture video [17]. To enhance the model performance and also replace the handcrafted feature extractor in the model presented in [17], we propose a GCN model and combine it with stacked Bi-LSTM and Attention modules to propagate the temporal information in the video stream. Considering the breakthroughs of GCN models for the skeleton modality, we propose a two-layer GCN model to empower the 3D hand skeleton features. Finally, the class probabilities of each isolated gesture are fed to the post-processing module borrowed from [17]. Furthermore, we replace the anatomical graph structure with some non-anatomical graph structures. Due to the lack of a large dataset including both continuous gesture sequences and the corresponding isolated gestures, three public datasets in Dynamic Hand Gesture Recognition (DHGR), RKS-PERSIANSIGN, and ASLVID, are used for evaluation. Experimental results show the superiority of the proposed model in detecting isolated gesture boundaries in continuous gesture sequences.
|
Razieh Rastgoo, Kourosh Kiani, Sergio Escalera, Vassilis Athitsos, & Mohammad Sabokrou. (2022). All You Need In Sign Language Production.
Abstract: Sign Language is the dominant form of communication used in the deaf and hearing-impaired community. To enable easy and mutual communication between the hearing-impaired and hearing communities, building a robust system capable of translating spoken language into sign language and vice versa is fundamental. To this end, sign language recognition and production are two necessary parts for making such a two-way system. Sign language recognition and production need to cope with some critical challenges. In this survey, we review recent advances in Sign Language Production (SLP) and related areas using deep learning. To offer a more realistic perspective on sign language, we present an introduction to Deaf culture, Deaf centers, the psychological perspective of sign language, and the main differences between spoken language and sign language. Furthermore, we present the fundamental components of a bi-directional sign language translation system, discussing the main challenges in this area. The backbone architectures and methods in SLP are briefly introduced, and the proposed taxonomy on SLP is presented. Finally, we present a general framework for SLP and performance evaluation, along with a discussion of recent developments, advantages, and limitations in SLP, commenting on possible lines of future research.
Keywords: Sign Language Production; Sign Language Recognition; Sign Language Translation; Deep Learning; Survey; Deaf
|
Ricardo Toledo, Ramon Baldrich, Ernest Valveny, & Petia Radeva. (2002). Enhancing snakes for vessel detection in angiography images.
|
Ricardo Toledo, X. Orriols, X. Binefa, Petia Radeva, Jordi Vitria, & Juan J. Villanueva. (2000). Tracking Elongated Structures using Statistical Snakes.
|