|
Records |
Links |
|
Author |
Josep Llados; Daniel Lopresti; Seiichi Uchida (eds) |
|
|
Title |
16th International Conference, 2021, Proceedings, Part I |
Type |
Book Whole |
|
Year |
2021 |
Publication |
Document Analysis and Recognition – ICDAR 2021 |
Abbreviated Journal |
|
|
|
Volume |
12821 |
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
This four-volume set (LNCS 12821, LNCS 12822, LNCS 12823, and LNCS 12824) constitutes the refereed proceedings of the 16th International Conference on Document Analysis and Recognition, ICDAR 2021, held in Lausanne, Switzerland, in September 2021. The 182 full papers were carefully reviewed and selected from 340 submissions, and are presented together with 13 competition reports.
The papers are organized into the following topical sections: historical document analysis, document analysis systems, handwriting recognition, scene text detection and recognition, document image processing, natural language processing (NLP) for document understanding, and graphics, diagram and math recognition. |
|
|
Address |
Lausanne, Switzerland, September 5-10, 2021 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer Cham |
Place of Publication |
|
Editor |
Josep Llados; Daniel Lopresti; Seiichi Uchida |
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
978-3-030-86548-1 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICDAR |
|
|
Notes |
DAG |
Approved |
no |
|
|
Call Number |
Admin @ si @ |
Serial |
3725 |
|
Permanent link to this record |
|
|
|
|
Author |
Michael Teutsch; Angel Sappa; Riad I. Hammoud |
|
|
Title |
Cross-Spectral Image Processing |
Type |
Book Chapter |
|
Year |
2022 |
Publication |
Computer Vision in the Infrared Spectrum. Synthesis Lectures on Computer Vision |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
23-34 |
|
|
Keywords |
|
|
|
Abstract |
Although this book is on IR computer vision, and its main focus lies on IR image and video processing and analysis, special attention is dedicated to cross-spectral image processing due to the increasing number of publications and applications in this domain. In these cross-spectral frameworks, IR information is used together with information from other spectral bands to tackle specific problems by developing more robust solutions. Tasks considered for cross-spectral processing include, for instance, dehazing, segmentation, vegetation index estimation, and face recognition. This increasing number of applications is motivated by the cross- and multi-spectral camera setups already available on the market, for example in smartphones, remote sensing multispectral cameras, and multi-spectral cameras for automotive systems or drones. In this chapter, different cross-spectral image processing techniques are reviewed together with possible applications. Initially, image registration approaches for the cross-spectral case are reviewed: the registration stage is the first image processing task, needed to align images acquired by different sensors within the same reference coordinate system. Then, recent cross-spectral image colorization approaches, which are intended to colorize infrared images for different applications, are presented. Finally, the cross-spectral image enhancement problem is tackled, including guided super-resolution techniques, image dehazing approaches, cross-spectral filtering, and edge detection. Figure 3.1 illustrates the cross-spectral image processing stages as well as their possible connections. Table 3.1 presents some of the available public cross-spectral datasets generally used as reference data to evaluate cross-spectral image registration, colorization, enhancement, or exploitation results. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
SLCV |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
978-3-031-00698-2 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MSIAU; MACO |
Approved |
no |
|
|
Call Number |
Admin @ si @ TSH2022b |
Serial |
3805 |
|
Permanent link to this record |
|
|
|
|
Author |
Michael Teutsch; Angel Sappa; Riad I. Hammoud |
|
|
Title |
Detection, Classification, and Tracking |
Type |
Book Chapter |
|
Year |
2022 |
Publication |
Computer Vision in the Infrared Spectrum. Synthesis Lectures on Computer Vision |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
35-58 |
|
|
Keywords |
|
|
|
Abstract |
Automatic image and video exploitation or content analysis is a technique to extract higher-level information from a scene, such as objects, behavior, (inter-)actions, environment, or even weather conditions. The relevant information is assumed to be contained in the two-dimensional signal provided by an image (width and height in pixels) or the three-dimensional signal provided by a video (width, height, and time). Intermediate-level information such as object classes [196], locations [197], or motion [198] can also help applications to fulfill certain tasks such as intelligent compression [199], video summarization [200], or video retrieval [201]. Usually, videos, with their temporal dimension, are a richer source of data than single images [202], and thus certain content, such as object motion or object behavior, can be extracted only from videos. Often, machine learning or, nowadays, deep learning techniques are utilized to model prior knowledge about object or scene appearance using labeled training samples [203, 204]. After a learning phase, these models are then applied in real-world applications, which is called inference. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
SLCV |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
978-3-031-00698-2 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MSIAU; MACO |
Approved |
no |
|
|
Call Number |
Admin @ si @ TSH2022c |
Serial |
3806 |
|
Permanent link to this record |
|
|
|
|
Author |
Jorge Charco; Angel Sappa; Boris X. Vintimilla; Henry Velesaca |
|
|
Title |
Human Body Pose Estimation in Multi-view Environments |
Type |
Book Chapter |
|
Year |
2022 |
Publication |
ICT Applications for Smart Cities. Intelligent Systems Reference Library |
Abbreviated Journal |
|
|
|
Volume |
224 |
Issue |
|
Pages |
79-99 |
|
|
Keywords |
|
|
|
Abstract |
This chapter tackles the challenging problem of human pose estimation in multi-view environments to handle scenes with self-occlusions. The proposed approach starts by estimating the camera pose (extrinsic parameters) in multi-view scenarios; due to the scarcity of real image datasets, different virtual scenes are generated using a dedicated simulator for training and testing the proposed convolutional neural network based approaches. These extrinsic parameters are then used to establish the relation between the different cameras in the multi-view scheme, which captures the pose of the person from different points of view at the same time. The proposed multi-view scheme allows human body joint positions to be estimated robustly even in situations where they are occluded. This would help to avoid possible false alarms in behavioral analysis systems of smart cities, as well as in applications for physical therapy and safe mobility assistance for the elderly, among others. The chapter concludes by presenting experimental results in real scenes using both state-of-the-art approaches and the proposed multi-view approach. |
|
|
Address |
September 2022 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
ISRL |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
978-3-031-06306-0 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MSIAU; MACO |
Approved |
no |
|
|
Call Number |
Admin @ si @ CSV2022b |
Serial |
3810 |
|
Permanent link to this record |
|
|
|
|
Author |
Henry Velesaca; Patricia Suarez; Dario Carpio; Rafael E. Rivadeneira; Angel Sanchez; Angel Morera |
|
|
Title |
Video Analytics in Urban Environments: Challenges and Approaches |
Type |
Book Chapter |
|
Year |
2022 |
Publication |
ICT Applications for Smart Cities |
Abbreviated Journal |
|
|
|
Volume |
224 |
Issue |
|
Pages |
101-121 |
|
|
Keywords |
|
|
|
Abstract |
This chapter reviews state-of-the-art approaches generally present in the video analytics pipeline for urban scenarios. A typical pipeline is used to cluster approaches in the literature, including image preprocessing, object detection, object classification, and object tracking modules. A review of recent approaches for each module is then given. Additionally, applications and datasets generally used for training and evaluating the performance of these approaches are included. This chapter is not intended as an exhaustive review of state-of-the-art video analytics in urban environments, but rather as an illustration of some recent contributions. The chapter concludes by presenting current trends in the urban video analytics field. |
|
|
Address |
September 2022 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
ISRL |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
978-3-031-06306-0 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MSIAU; MACO |
Approved |
no |
|
|
Call Number |
Admin @ si @ VSC2022 |
Serial |
3811 |
|
Permanent link to this record |
|
|
|
|
Author |
Angel Sappa (ed) |
|
|
Title |
ICT Applications for Smart Cities |
Type |
Book Whole |
|
Year |
2022 |
Publication |
ICT Applications for Smart Cities |
Abbreviated Journal |
|
|
|
Volume |
224 |
Issue |
|
Pages |
|
|
|
Keywords |
Computational Intelligence; Intelligent Systems; Smart Cities; ICT Applications; Machine Learning; Pattern Recognition; Computer Vision; Image Processing |
|
|
Abstract |
Part of the book series: Intelligent Systems Reference Library (ISRL)
This book is the result of four years of work in the framework of the Ibero-American Research Network TICs4CI funded by the CYTED program. In the coming decades, 85% of the world's population is expected to live in cities; hence, urban centers should be prepared to provide smart solutions for problems ranging from video surveillance and intelligent mobility to solid waste recycling processes, just to mention a few. More specifically, the book describes underlying technologies and practical implementations of several successful case studies of ICTs developed in the following smart city areas:
• Urban environment monitoring
• Intelligent mobility
• Waste recycling processes
• Video surveillance
• Computer-aided diagnosis in healthcare systems
• Computer vision-based approaches for efficiency in production processes
The book is intended for researchers and engineers in the field of ICTs for smart cities, as well as for anyone who wants to learn about state-of-the-art approaches and challenges in this field. |
|
|
Address |
September 2022 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer |
Place of Publication |
|
Editor |
Angel Sappa |
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
ISRL |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
978-3-031-06306-0 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MSIAU; MACO |
Approved |
no |
|
|
Call Number |
Admin @ si @ Sap2022 |
Serial |
3812 |
|
Permanent link to this record |
|
|
|
|
Author |
Victoria Ruiz; Angel Sanchez; Jose F. Velez; Bogdan Raducanu |
|
|
Title |
Waste Classification with Small Datasets and Limited Resources |
Type |
Book Chapter |
|
Year |
2022 |
Publication |
ICT Applications for Smart Cities. Intelligent Systems Reference Library |
Abbreviated Journal |
|
|
|
Volume |
224 |
Issue |
|
Pages |
185-203 |
|
|
Keywords |
|
|
|
Abstract |
Automatic waste recycling has become a very important societal challenge nowadays, raising people’s awareness of a cleaner environment and a more sustainable lifestyle. With the transition to Smart Cities, and thanks to advanced ICT solutions, this problem has received a new impulse. The waste recycling focus has shifted from general waste treatment facilities to an individual responsibility, where each person should become aware of selective waste separation. The surge of mobile devices, accompanied by a significant increase in computational power, has potentiated and facilitated this individual role. An automated image-based waste classification mechanism can help with more efficient recycling and a reduction of contamination from residuals. Despite the good results achieved with deep learning methodologies for this task, their Achilles’ heel is that they require large neural networks, which need significant computational resources for training and are therefore not suitable for mobile devices. To circumvent this apparently intractable problem, we rely on knowledge distillation to transfer knowledge from a larger network (the ‘teacher’) to a smaller, more compact one (the ‘student’), thus making image classification possible on devices with limited resources. For evaluation, we considered as ‘teachers’ large architectures such as InceptionResNet or DenseNet and, as ‘students’, several configurations of the MobileNets. We used the publicly available TrashNet dataset to demonstrate that the distillation process does not significantly affect the performance (e.g., classification accuracy) of the student network. |
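The knowledge-distillation objective summarized in the abstract can be sketched in a few lines; the following NumPy illustration is a minimal, generic formulation (the temperature, weighting, and function names are illustrative assumptions, not the chapter's implementation):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Weighted sum of a softened KL term (teacher -> student) and the
    usual hard-label cross-entropy on the student."""
    p_t = softmax(teacher_logits, T)       # softened teacher targets
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    hard = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12)
    return np.mean(alpha * (T ** 2) * kl + (1 - alpha) * hard)
```

The `T ** 2` factor is the standard rescaling that keeps the soft-target gradients comparable in magnitude to the hard-label term as the temperature grows.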
|
|
Address |
September 2022 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
ISRL |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
978-3-031-06306-0 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
LAMP |
Approved |
no |
|
|
Call Number |
Admin @ si @ |
Serial |
3813 |
|
Permanent link to this record |
|
|
|
|
Author |
Sergi Garcia Bordils; George Tom; Sangeeth Reddy; Minesh Mathew; Marçal Rusiñol; C.V. Jawahar; Dimosthenis Karatzas |
|
|
Title |
Read While You Drive – Multilingual Text Tracking on the Road |
Type |
Conference Article |
|
Year |
2022 |
Publication |
15th IAPR International workshop on document analysis systems |
Abbreviated Journal |
|
|
|
Volume |
13237 |
Issue |
|
Pages |
756–770 |
|
|
Keywords |
|
|
|
Abstract |
Visual data obtained in driving scenarios usually contain large amounts of text that conveys semantic information necessary to analyse the urban environment and is integral to the traffic control plan. Yet, research on autonomous driving or driver assistance systems typically ignores this information. To advance research in this direction, we present RoadText-3K, a large driving video dataset with fully annotated text. RoadText-3K is three times bigger than its predecessor and contains data from varied geographical locations, unconstrained driving conditions, and multiple languages and scripts. We offer a comprehensive analysis of tracking-by-detection and detection-by-tracking methods, exploring the limits of state-of-the-art text detection. Finally, we propose a new end-to-end trainable tracking model that yields state-of-the-art results on this challenging dataset. Our experiments demonstrate the complexity and variability of RoadText-3K and establish a new, realistic benchmark for scene text tracking in the wild. |
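The tracking-by-detection baseline analysed in the paper boils down to associating per-frame detections over time; a minimal sketch of greedy IoU-based association follows (the threshold and function names are illustrative assumptions, not the authors' trainable model):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter) if inter else 0.0

def greedy_match(tracks, detections, thr=0.5):
    """Greedily link each existing track box to the best-overlapping
    unused detection in the new frame; returns {track_idx: det_idx}."""
    links, used = {}, set()
    for ti, t in enumerate(tracks):
        best, best_iou = None, thr
        for di, d in enumerate(detections):
            if di in used:
                continue
            o = iou(t, d)
            if o > best_iou:
                best, best_iou = di, o
        if best is not None:
            links[ti] = best
            used.add(best)
    return links
```

Unmatched tracks would then be aged out and unmatched detections spawn new tracks; scene-text trackers typically replace the greedy loop with a learned appearance affinity.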
|
|
Address |
La Rochelle; France; May 2022 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
978-3-031-06554-5 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
DAS |
|
|
Notes |
DAG; 600.155; 611.022; 611.004 |
Approved |
no |
|
|
Call Number |
Admin @ si @ GTR2022 |
Serial |
3783 |
|
Permanent link to this record |
|
|
|
|
Author |
Utkarsh Porwal; Alicia Fornes; Faisal Shafait (eds) |
|
|
Title |
Frontiers in Handwriting Recognition: 18th International Conference, ICFHR 2022 |
Type |
Book Whole |
|
Year |
2022 |
Publication |
Frontiers in Handwriting Recognition |
Abbreviated Journal |
|
|
|
Volume |
13639 |
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
ICFHR 2022, Hyderabad, India, December 4–7, 2022 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer |
Place of Publication |
|
Editor |
Utkarsh Porwal; Alicia Fornes; Faisal Shafait |
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
978-3-031-21648-0 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICFHR |
|
|
Notes |
DAG |
Approved |
no |
|
|
Call Number |
Admin @ si @ PFS2022 |
Serial |
3809 |
|
Permanent link to this record |
|
|
|
|
Author |
Andrea Gemelli; Sanket Biswas; Enrico Civitelli; Josep Llados; Simone Marinai |
|
|
Title |
Doc2Graph: A Task Agnostic Document Understanding Framework Based on Graph Neural Networks |
Type |
Conference Article |
|
Year |
2022 |
Publication |
17th European Conference on Computer Vision Workshops |
Abbreviated Journal |
|
|
|
Volume |
13804 |
Issue |
|
Pages |
329–344 |
|
|
Keywords |
|
|
|
Abstract |
Geometric Deep Learning has recently attracted significant interest in a wide range of machine learning fields, including document analysis. The application of Graph Neural Networks (GNNs) has become crucial in various document-related tasks since they can unravel important structural patterns, fundamental in key information extraction processes. Previous works in the literature propose task-driven models and do not take into account the full power of graphs. We propose Doc2Graph, a task-agnostic document understanding framework based on a GNN model, to solve different tasks given different types of documents. We evaluated our approach on two challenging datasets for key information extraction in form understanding, invoice layout analysis and table detection. |
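The message-passing operation at the heart of GNN-based frameworks such as the one described here can be sketched in a few lines; this is a generic mean-aggregation layer (a deliberate simplification, not Doc2Graph's actual architecture):

```python
import numpy as np

def gnn_layer(H, A, W):
    """One mean-aggregation message-passing layer: each node averages its
    neighbours' features (A is the adjacency matrix), applies a linear
    map W, then a ReLU non-linearity."""
    deg = A.sum(axis=1, keepdims=True) + 1e-12   # neighbour counts
    return np.maximum(((A @ H) / deg) @ W, 0.0)
```

In a document graph, `H` would hold per-node features (e.g. text and layout embeddings of words or entities) and `A` the spatial or reading-order connectivity; stacking such layers lets each node's representation absorb structural context.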
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
978-3-031-25068-2 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ECCV-TiE |
|
|
Notes |
DAG; 600.162; 600.140; 110.312 |
Approved |
no |
|
|
Call Number |
Admin @ si @ GBC2022 |
Serial |
3795 |
|
Permanent link to this record |
|
|
|
|
Author |
Fadi Dornaika; Alireza Bosaghzadeh; Bogdan Raducanu |
|
|
Title |
Efficient Graph Construction for Label Propagation based Multi-observation Face Recognition |
Type |
Conference Article |
|
Year |
2013 |
Publication |
Human Behavior Understanding 4th International Workshop |
Abbreviated Journal |
|
|
|
Volume |
8212 |
Issue |
|
Pages |
124-135 |
|
|
Keywords |
|
|
|
Abstract |
Workshop on Human Behavior Understanding
Human-machine interaction is a hot topic nowadays in the multimedia and computer vision communities. In this context, face recognition algorithms (used as the primary cue for a person’s identity assessment) work well under controlled conditions but degrade significantly when tested in real-world environments. Recently, graph-based label propagation for multi-observation face recognition was proposed. However, the associated graphs were constructed in an ad hoc manner (e.g., using the KNN graph) that cannot adapt optimally to the data. In this paper, we propose a novel approach for efficient and adaptive graph construction that can be used for multi-observation face recognition as well as for other recognition problems. Experimental results on the Honda video face database show a distinct advantage of the proposed method over standard graph construction methods. |
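The graph-based label propagation this work builds on can be sketched as follows; this uses a standard Gaussian affinity and iterative spreading (parameter values are illustrative, and this is the baseline formulation, not the paper's adaptive graph construction):

```python
import numpy as np

def label_propagation(X, labels, sigma=1.0, alpha=0.9, iters=200):
    """Spread labels over a Gaussian affinity graph.
    labels: class index for labelled samples, -1 for unlabelled."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)                  # no self-loops
    D = W.sum(axis=1)
    S = W / np.sqrt(np.outer(D, D))           # symmetric normalisation
    k = labels.max() + 1
    Y = np.zeros((n, k))
    Y[labels >= 0, labels[labels >= 0]] = 1.0  # one-hot seed labels
    F = Y.copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y    # spread, then re-anchor seeds
    return F.argmax(axis=1)
```

The quality of the result hinges entirely on `W`, which is exactly the point the paper makes: a fixed Gaussian/KNN affinity cannot adapt to the data, whereas their construction learns it.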
|
|
Address |
Barcelona |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer International Publishing |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0302-9743 |
ISBN |
978-3-319-02713-5 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
HBU |
|
|
Notes |
OR;MV |
Approved |
no |
|
|
Call Number |
Admin @ si @ DBR2013 |
Serial |
2315 |
|
Permanent link to this record |
|
|
|
|
Author |
Alex Pardo; Albert Clapes; Sergio Escalera; Oriol Pujol |
|
|
Title |
Actions in Context: System for people with Dementia |
Type |
Conference Article |
|
Year |
2013 |
Publication |
2nd International Workshop on Citizen Sensor Networks (Citisen2013) at the European Conference on Complex Systems |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
3-14 |
|
|
Keywords |
Multi-modal data Fusion; Computer vision; Wearable sensors; Gesture recognition; Dementia |
|
|
Abstract |
In the next forty years, the number of people living with dementia is expected to triple. In the last stages, people affected by this disease become dependent. This hinders the autonomy of the patient and has a huge social impact in time, money, and effort. Given this scenario, we propose a ubiquitous system capable of recognizing specific daily actions. The system fuses and synchronizes data obtained from two complementary modalities: ambient and egocentric. The ambient approach consists of a fixed RGB-Depth camera for user and object recognition and user-object interaction, whereas the egocentric point of view is given by a personal area network (PAN) formed by a few wearable sensors and a smartphone, used for gesture recognition. The system processes multi-modal data in real time, performing parallel task recognition and modality synchronization, achieving high performance in recognizing subjects, objects, and interactions, and demonstrating its reliability for real-world scenarios. |
|
|
Address |
Barcelona; September 2013 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer International Publishing |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0302-9743 |
ISBN |
978-3-319-04177-3 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ECCS |
|
|
Notes |
HUPBA;MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ PCE2013 |
Serial |
2354 |
|
Permanent link to this record |
|
|
|
|
Author |
Carles Sanchez; Jorge Bernal; Debora Gil; F. Javier Sanchez |
|
|
Title |
On-line lumen centre detection in gastrointestinal and respiratory endoscopy |
Type |
Conference Article |
|
Year |
2013 |
Publication |
Second International Workshop Clinical Image-Based Procedures |
Abbreviated Journal |
|
|
|
Volume |
8361 |
Issue |
|
Pages |
31-38 |
|
|
Keywords |
Lumen centre detection; Bronchoscopy; Colonoscopy |
|
|
Abstract |
We present in this paper a novel lumen centre detection method for gastrointestinal and respiratory endoscopic images. The proposed method is based on the appearance and geometry of the lumen, which we define as the darkest image region whose centre is a hub of image gradients. Experimental results, validated on the first public annotated gastro-respiratory database, prove the reliability of the method for a wide range of images (with precision over 95%). |
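The idea of scoring a pixel by darkness plus gradient convergence can be sketched directly; this toy NumPy version brute-forces every candidate centre (the additive scoring and function name are illustrative assumptions, not the paper's exact model):

```python
import numpy as np

def lumen_centre(img):
    """Score every pixel by (a) darkness and (b) how strongly the image
    gradients point away from it radially (a dark blob's gradients radiate
    outward from its centre), and return the best-scoring (row, col)."""
    gy, gx = np.gradient(img.astype(float))      # d/drow, d/dcol
    mag = np.hypot(gx, gy) + 1e-12
    rows, cols = np.indices(img.shape)
    dark = 1.0 - (img - img.min()) / (np.ptp(img) + 1e-12)
    best, best_score = None, -np.inf
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            dy, dx = rows - r, cols - c          # vectors from candidate to pixels
            d = np.hypot(dx, dy) + 1e-12
            # cosine between each gradient and the outward radial direction
            align = (gy * dy + gx * dx) / (mag * d)
            score = align.mean() + dark[r, c]
            if score > best_score:
                best, best_score = (r, c), score
    return best
```

A real-time implementation would of course accumulate gradient votes instead of testing every candidate, which is where the "hub of image gradients" formulation pays off.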
|
|
Address |
Nagoya; Japan; September 2013 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer International Publishing |
Place of Publication |
|
Editor |
Erdt, Marius and Linguraru, Marius George and Oyarzun Laura, Cristina and Shekhar, Raj and Wesarg, Stefan and González Ballester, Miguel Angel and Drechsler, Klaus |
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
978-3-319-05665-4 |
Medium |
|
|
|
Area |
800 |
Expedition |
|
Conference |
CLIP |
|
|
Notes |
MV; IAM; 600.047; 600.044; 600.060 |
Approved |
no |
|
|
Call Number |
Admin @ si @ SBG2013 |
Serial |
2302 |
|
Permanent link to this record |
|
|
|
|
Author |
Juan Ramon Terven Salinas; Joaquin Salas; Bogdan Raducanu |
|
|
Title |
Robust Head Gestures Recognition for Assistive Technology |
Type |
Book Chapter |
|
Year |
2014 |
Publication |
Pattern Recognition |
Abbreviated Journal |
|
|
|
Volume |
8495 |
Issue |
|
Pages |
152-161 |
|
|
Keywords |
|
|
|
Abstract |
This paper presents a system capable of recognizing six head gestures: nodding, shaking, turning right, turning left, looking up, and looking down. The main difference of our system compared to other methods is that the Hidden Markov Models presented in this paper are fully connected and consider all possible states in any given order, providing the following advantages: (1) it allows unconstrained movement of the head, and (2) it can be easily integrated into a wearable device (e.g., glasses, neck-hung devices), in which case it can robustly recognize gestures in the presence of ego-motion. Experimental results show that this approach outperforms common methods that use restricted HMMs for each gesture. |
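A fully connected HMM, in which every state can follow every other, is decoded with the standard Viterbi algorithm; a minimal sketch follows (the toy transition and emission probabilities are illustrative, not the paper's gesture models):

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state path for a discrete observation sequence.
    pi: initial state probabilities; A[i, j]: transition i -> j (fully
    connected: every entry may be non-zero); B[state, symbol]: emissions."""
    T, N = len(obs), len(pi)
    logp = np.log(pi * B[:, obs[0]] + 1e-300)   # best log-prob per state
    back = np.zeros((T, N), dtype=int)          # backpointers
    lA, lB = np.log(A + 1e-300), np.log(B + 1e-300)
    for t in range(1, T):
        scores = logp[:, None] + lA             # from-state x to-state
        back[t] = scores.argmax(axis=0)
        logp = scores.max(axis=0) + lB[:, obs[t]]
    path = [int(logp.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

Restricting `A` (e.g. to a left-to-right topology per gesture) is what the compared methods do; keeping it dense is what lets the paper's models handle unconstrained head movement.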
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer International Publishing |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0302-9743 |
ISBN |
978-3-319-07490-0 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
LAMP; |
Approved |
no |
|
|
Call Number |
Admin @ si @ TSR2014b |
Serial |
2505 |
|
Permanent link to this record |
|
|
|
|
Author |
Oualid M. Benkarim; Petia Radeva; Laura Igual |
|
|
Title |
Label Consistent Multiclass Discriminative Dictionary Learning for MRI Segmentation |
Type |
Conference Article |
|
Year |
2014 |
Publication |
8th Conference on Articulated Motion and Deformable Objects |
Abbreviated Journal |
|
|
|
Volume |
8563 |
Issue |
|
Pages |
138-147 |
|
|
Keywords |
MRI segmentation; sparse representation; discriminative dictionary learning; multiclass classification |
|
|
Abstract |
The automatic segmentation of multiple subcortical structures in brain Magnetic Resonance Images (MRI) still remains a challenging task. In this paper, we address this problem using sparse representation and discriminative dictionary learning, which have shown promising results in compression, image denoising, and recently in MRI segmentation. In particular, we use multiclass dictionaries learned from a set of brain atlases to simultaneously segment multiple subcortical structures.
We also constrain dictionary atoms to be specialized in one given class using label-consistent K-SVD, which can alleviate the bias produced by unbalanced libraries that arises when dealing with small structures. The proposed method is compared with other state-of-the-art approaches for the segmentation of the Basal Ganglia of 35 subjects from a public dataset.
The promising results of the segmentation method show the efficiency of the multiclass discriminative dictionary learning algorithms in MRI segmentation problems. |
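The sparse-coding step underlying dictionary-based segmentation can be sketched with a greedy matching pursuit; this is a generic illustration of coding a signal over a learned dictionary (not the label-consistent K-SVD training used in the paper):

```python
import numpy as np

def matching_pursuit(D, x, k=3):
    """Greedy k-sparse coding of a signal x over a dictionary D whose
    columns are unit-norm atoms: repeatedly pick the atom most correlated
    with the residual and subtract its contribution."""
    r = x.astype(float).copy()
    code = np.zeros(D.shape[1])
    for _ in range(k):
        c = D.T @ r                   # correlation with every atom
        j = int(np.abs(c).argmax())   # best-matching atom
        code[j] += c[j]
        r = r - c[j] * D[:, j]        # remove the explained component
    return code
```

In the segmentation setting, a voxel's patch is coded this way against per-class dictionaries, and the class whose atoms explain it with the lowest residual wins; label-consistent K-SVD additionally ties each atom to a single class during training.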
|
|
Address |
Palma de Mallorca; July 2014 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer International Publishing |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0302-9743 |
ISBN |
978-3-319-08848-8 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
AMDO |
|
|
Notes |
MILAB; OR |
Approved |
no |
|
|
Call Number |
Admin @ si @ BRI2014 |
Serial |
2494 |
|
Permanent link to this record |