Author |
Razieh Rastgoo; Kourosh Kiani; Sergio Escalera |

|
|
Title |
A transformer model for boundary detection in continuous sign language |
Type |
Journal Article |
|
Year |
2024 |
Publication  |
Multimedia Tools and Applications |
Abbreviated Journal |
MTAP |
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
Sign Language Recognition (SLR) has garnered significant attention from researchers in recent years, particularly the intricate domain of Continuous Sign Language Recognition (CSLR), which presents heightened complexity compared to Isolated Sign Language Recognition (ISLR). One of the prominent challenges in CSLR pertains to accurately detecting the boundaries of isolated signs within a continuous video stream. Additionally, the reliance on handcrafted features in existing models poses a challenge to achieving optimal accuracy. To surmount these challenges, we propose a novel approach utilizing a Transformer-based model. Unlike traditional models, our approach focuses on enhancing accuracy while eliminating the need for handcrafted features. The Transformer model is employed for both ISLR and CSLR. The training process involves using isolated sign videos, where hand keypoint features extracted from the input video are enriched using the Transformer model. Subsequently, these enriched features are forwarded to the final classification layer. The trained model, coupled with a post-processing method, is then applied to detect isolated sign boundaries within continuous sign videos. The evaluation of our model, conducted on two distinct datasets that include both continuous signs and their corresponding isolated signs, demonstrates promising results. |
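A minimal sketch of the pipeline the abstract describes: a Transformer encoder over per-frame hand-keypoint features for isolated sign classification, followed by a sliding-window pass over a continuous video that flags low-confidence regions as candidate sign boundaries. This is not the authors' implementation; the keypoint dimensionality, window size, confidence threshold, and post-processing rule are illustrative assumptions (PyTorch).

    import torch
    import torch.nn as nn

    class SignTransformer(nn.Module):
        """Transformer encoder over per-frame hand-keypoint vectors (sketch)."""
        def __init__(self, keypoint_dim=84, d_model=128, nhead=4, num_layers=2, num_classes=100):
            super().__init__()
            self.proj = nn.Linear(keypoint_dim, d_model)
            layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
            self.head = nn.Linear(d_model, num_classes)

        def forward(self, x):                    # x: (batch, frames, keypoint_dim)
            h = self.encoder(self.proj(x))       # enriched per-frame features
            return self.head(h.mean(dim=1))      # clip-level class logits

    def detect_boundaries(model, frames, window=32, stride=8, conf_thresh=0.5):
        """Slide a window over a continuous keypoint sequence and mark
        low-confidence regions as candidate sign boundaries.
        Window size and threshold are assumptions, not the paper's values."""
        model.eval()
        boundaries = []
        with torch.no_grad():
            for start in range(0, frames.shape[0] - window + 1, stride):
                clip = frames[start:start + window].unsqueeze(0)
                conf = torch.softmax(model(clip), dim=-1).max().item()
                if conf < conf_thresh:           # likely transition between signs
                    boundaries.append(start + window // 2)
        return boundaries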
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HUPBA;MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ RKE2024 |
Serial |
4016 |
|
Permanent link to this record |
|
|
|
|
Author |
Hugo Jair Escalante; Victor Ponce; Sergio Escalera; Xavier Baro; Alicia Morales-Reyes; Jose Martinez-Carranza |


|
|
Title |
Evolving weighting schemes for the Bag of Visual Words |
Type |
Journal Article |
|
Year |
2017 |
Publication  |
Neural Computing and Applications |
Abbreviated Journal |
Neural Computing and Applications |
|
|
Volume |
28 |
Issue |
5 |
Pages |
925–939 |
|
|
Keywords |
Bag of Visual Words; Bag of features; Genetic programming; Term-weighting schemes; Computer vision |
|
|
Abstract |
The Bag of Visual Words (BoVW) is an established representation in computer vision. Taking inspiration from text mining, this representation has proved to be very effective in many domains. However, in most cases, standard term-weighting schemes are adopted (e.g., term frequency or TF-IDF). It remains an open question whether alternative weighting schemes could boost the performance of methods based on BoVW. More importantly, it is unknown whether it is possible to automatically learn and determine effective weighting schemes from scratch. This paper sheds some light on both of these unknowns. On the one hand, we report an evaluation of the most common weighting schemes used in text mining but rarely used in computer vision tasks. On the other hand, we propose an evolutionary algorithm capable of automatically learning weighting schemes for computer vision problems. We report empirical results from an extensive study on several computer vision problems. Results show the usefulness of the proposed method. |
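For reference, a minimal sketch of the standard weighting schemes the paper evaluates: term frequency and TF-IDF applied to per-image visual-word histograms. The codebook size and counts below are made up, and the paper's actual contribution, evolving weighting schemes with genetic programming, is not shown here.

    import numpy as np

    def tf(hist):
        """Term frequency: normalized visual-word counts for one image."""
        return hist / max(hist.sum(), 1)

    def tfidf(hists):
        """TF-IDF over a collection of per-image visual-word histograms (rows)."""
        tf_mat = hists / np.maximum(hists.sum(axis=1, keepdims=True), 1)
        df = (hists > 0).sum(axis=0)                      # document frequency per visual word
        idf = np.log(hists.shape[0] / np.maximum(df, 1))
        return tf_mat * idf

    # Toy example: 3 images, codebook of 5 visual words
    hists = np.array([[2, 0, 1, 0, 3],
                      [0, 1, 0, 0, 4],
                      [1, 1, 2, 0, 0]], dtype=float)
    print(tfidf(hists))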
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HUPBA;MV; not mentioned;OR;MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ EPE2017 |
Serial |
2743 |
|
Permanent link to this record |
|
|
|
|
Author |
Jose Garcia-Rodriguez; Isabelle Guyon; Sergio Escalera; Alexandra Psarrou; Andrew Lewis; Miguel Cazorla |

|
|
Title |
Editorial: Special Issue on Computational Intelligence for Vision and Robotics |
Type |
Journal Article |
|
Year |
2017 |
Publication  |
Neural Computing and Applications |
Abbreviated Journal |
Neural Computing and Applications |
|
|
Volume |
28 |
Issue |
5 |
Pages |
853–854 |
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HuPBA;MILAB; not mentioned |
Approved |
no |
|
|
Call Number |
Admin @ si @ GGE2017 |
Serial |
2845 |
|
Permanent link to this record |
|
|
|
|
Author |
Daniel Sanchez; Miguel Angel Bautista; Sergio Escalera |

|
|
Title |
HuPBA 8k+: Dataset and ECOC-GraphCut based Segmentation of Human Limbs |
Type |
Journal Article |
|
Year |
2015 |
Publication  |
Neurocomputing |
Abbreviated Journal |
NEUCOM |
|
|
Volume |
150 |
Issue |
A |
Pages |
173–188 |
|
|
Keywords |
Human limb segmentation; ECOC; Graph-Cuts |
|
|
Abstract |
Human multi-limb segmentation in RGB images has attracted a lot of interest in the research community because of the large number of possible applications in fields such as Human-Computer Interaction, Surveillance, eHealth, or Gaming. Nevertheless, human multi-limb segmentation is a very hard task because of the changes in appearance produced by different points of view, clothing, lighting conditions, occlusions, and the number of articulations of the human body. Furthermore, this huge pose variability makes the availability of large annotated datasets difficult. In this paper, we introduce the HuPBA8k+ dataset. The dataset contains more than 8,000 frames labeled at pixel precision, including more than 120,000 manually labeled samples of 14 different limbs. For completeness, the dataset is also labeled at frame level with action annotations drawn from an 11-action dictionary that includes both single-person actions and person-person interactive actions. Furthermore, we also propose a two-stage approach for the segmentation of human limbs. In the first stage, cascades of classifiers are trained to split human limbs in a tree-structured way and are included in an Error-Correcting Output Codes (ECOC) framework to define a body-like probability map. This map is used to obtain a binary mask of the subject by means of GMM color modelling and GraphCuts theory. In the second stage, we embed a similar tree structure in an ECOC framework to build a more accurate set of limb-like probability maps within the segmented user mask, which are fed to a multi-label GraphCut procedure to obtain the final multi-limb segmentation. The methodology is tested on the novel HuPBA8k+ dataset, showing performance improvements in comparison to state-of-the-art approaches. In addition, a baseline of standard action recognition methods for the 11 action categories of the novel dataset is also provided. |
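A minimal sketch of the ECOC decoding step only, which turns per-pixel binary-classifier outputs into limb-like probability maps. The coding matrix, class count, and scores below are invented for illustration; the trained cascades, GMM color modelling, and GraphCut stages of the paper are not reproduced.

    import numpy as np

    # Illustrative ECOC coding matrix: rows = limb classes, columns = binary problems.
    # Zero entries mean the class is not involved in that binary problem.
    # The paper uses a tree-structured set of cascades; this matrix is made up.
    M = np.array([[ 1,  1,  0],
                  [ 1, -1,  1],
                  [-1,  0,  1],
                  [-1, -1, -1]], dtype=float)   # 4 limb classes, 3 binary classifiers

    def ecoc_decode(binary_scores, M):
        """Soft ECOC decoding: distance of each pixel's binary-classifier outputs
        (in [-1, 1]) to every class codeword; smaller distance = more likely class."""
        # binary_scores: (num_pixels, num_binary_problems)
        dist = np.abs(binary_scores[:, None, :] - M[None, :, :]) * (np.abs(M) > 0)
        dist = dist.sum(axis=2)
        probs = np.exp(-dist)
        return probs / probs.sum(axis=1, keepdims=True)   # limb-like probability map

    scores = np.random.uniform(-1, 1, size=(6, 3))        # 6 pixels, toy classifier outputs
    print(ecoc_decode(scores, M))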
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HuPBA;MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ SBE2015 |
Serial |
2552 |
|
Permanent link to this record |
|
|
|
|
Author |
Frederic Sampedro; Anna Domenech; Sergio Escalera |

|
|
Title |
Obtaining quantitative global tumoral state indicators based on whole-body PET/CT scans: A breast cancer case study |
Type |
Journal Article |
|
Year |
2014 |
Publication  |
Nuclear Medicine Communications |
Abbreviated Journal |
NMC |
|
|
Volume |
35 |
Issue |
4 |
Pages |
362-371 |
|
|
Keywords |
|
|
|
Abstract |
Objectives: In this work we address the need for the computation of quantitative global tumoral state indicators from oncological whole-body PET/computed tomography scans. The combination of such indicators with other oncological information such as tumor markers or biopsy results would prove useful in oncological decision-making scenarios.
Materials and methods: Starting from an ordering of 100 breast cancer patients by oncological state, obtained through visual analysis by a consensus of nuclear medicine specialists, we present a set of numerical indicators computed from image analysis of the PET/computed tomography scan that attempt to summarize a patient's oncological state in a quantitative manner, taking into consideration total tumor volume, aggressiveness, and spread.
Results: Comparative analysis of the proposed indicators with respect to the experts' evaluation shows up to 87% Pearson's correlation coefficient when expert-guided PET metabolic tumor volume segmentation is provided, and 64% correlation when completely automatic image analysis techniques are used.
Conclusion: Global quantitative tumor information obtained by whole-body PET/CT image analysis can prove useful in clinical nuclear medicine settings and oncological decision-making scenarios. Completely automatic computation of such indicators would further increase their impact by providing time efficiency and independence from specialist availability. |
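A minimal sketch of how an image-derived indicator could be correlated against the experts' ordering, as in the reported evaluation. The indicator formula and all numbers below are invented for illustration; they are not the paper's indicators or data.

    import numpy as np

    def pearson(x, y):
        """Pearson correlation between an image-derived indicator and an
        expert-provided patient ordering (both 1-D arrays of equal length)."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        xc, yc = x - x.mean(), y - y.mean()
        return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

    # Toy example: a hypothetical burden-style indicator combining volume and uptake
    volumes = np.array([12.0, 3.5, 40.2, 8.1])    # segmented metabolic tumor volume (ml)
    mean_suv = np.array([4.1, 2.3, 7.8, 3.0])     # mean standardized uptake value
    indicator = volumes * mean_suv                 # illustrative combination, not the paper's
    expert_rank = np.array([2, 1, 4, 3])           # consensus ordering by oncological state
    print(pearson(indicator, expert_rank))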
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HuPBA;MILAB |
Approved |
no |
|
|
Call Number |
SDE2014a |
Serial |
2444 |
|
Permanent link to this record |