|
|
Author |
Jose Manuel Alvarez; Antonio Lopez |
|
|
Title |
Novel Index for Objective Evaluation of Road Detection Algorithms |
Type |
Conference Article |
|
Year |
2008 |
Publication |
11th International IEEE Conference on Intelligent Transportation Systems |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
815–820 |
|
|
Keywords |
road detection |
|
|
Abstract |
|
|
|
Address |
Beijing (China) |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ITSC |
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
ADAS @ adas @ AlL2008 |
Serial |
1074 |
|
|
|
|
|
Author |
Jose Carlos Rubio; Joan Serrat; Antonio Lopez; Daniel Ponsa |
|
|
Title |
Multiple-target tracking for the intelligent headlights control |
Type |
Conference Article |
|
Year |
2010 |
Publication |
13th Annual International Conference on Intelligent Transportation Systems |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
903–910 |
|
|
Keywords |
Intelligent Headlights |
|
|
Abstract |
Intelligent vehicle lighting systems aim at automatically regulating the headlights' beam to illuminate as much of the road ahead as possible while avoiding dazzling other drivers. A key component of such a system is computer vision software able to distinguish blobs due to vehicles' headlights and rear lights from those due to road lamps and reflective elements such as poles and traffic signs. In a previous work, we devised a set of specialized supervised classifiers to make such decisions based on blob features related to intensity and shape. Despite the overall good performance, there remain challenging cases that have yet to be solved: notably, faint and tiny blobs corresponding to quite distant vehicles. In fact, for such distant blobs, classification decisions can only be taken after observing them during a few frames. Hence, incorporating tracking could improve the overall lighting system performance by enforcing the temporal consistency of the classifier decision. Accordingly, this paper focuses on the problem of constructing blob tracks, which is actually one of multiple-target tracking (MTT), but under two special conditions: we have to deal with frequent occlusions, as well as blob splits and merges. We approach it in a novel way by formulating the problem as maximum a posteriori inference on a Markov random field. The qualitative (in video form) and quantitative evaluation of our new MTT method shows good tracking results. In addition, we will also see that the classification performance of the problematic blobs improves due to the proposed MTT algorithm. |
|
|
Address |
Madeira Island (Portugal) |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ITSC |
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
ADAS @ adas @ RSL2010 |
Serial |
1422 |
|
|
|
|
|
Author |
Ferran Diego; Daniel Ponsa; Joan Serrat; Antonio Lopez |
|
|
Title |
Vehicle geolocalization based on video synchronization |
Type |
Conference Article |
|
Year |
2010 |
Publication |
13th Annual International Conference on Intelligent Transportation Systems |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
1511–1516 |
|
|
Keywords |
video alignment |
|
|
Abstract |
This paper proposes a novel method for estimating the geospatial localization of a vehicle. It uses as input a georeferenced video sequence recorded by a forward-facing camera attached to the windscreen. The core of the proposed method is an on-line video synchronization which, for each frame recorded by the camera on a second drive through the same track, finds the corresponding frame in the georeferenced video sequence. Once the corresponding frame is found, we transfer its geospatial information to the current frame. The key advantages of this method are: 1) an increase of the update rate and the geospatial accuracy with regard to a standard low-cost GPS, and 2) the ability to localize a vehicle even when a GPS is not available or not reliable enough, as in certain urban areas. Experimental results for an urban environment are presented, showing an average relative accuracy of 1.5 meters. |
|
|
Address |
Madeira Island (Portugal) |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
2153-0009 |
ISBN |
978-1-4244-7657-2 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ITSC |
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
ADAS @ adas @ DPS2010 |
Serial |
1423 |
|
|
|
|
|
Author |
Ferran Diego; Jose Manuel Alvarez; Joan Serrat; Antonio Lopez |
|
|
Title |
Vision-based road detection via on-line video registration |
Type |
Conference Article |
|
Year |
2010 |
Publication |
13th Annual International Conference on Intelligent Transportation Systems |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
1135–1140 |
|
|
Keywords |
video alignment; road detection |
|
|
Abstract |
Road segmentation is an essential functionality for supporting advanced driver assistance systems (ADAS) such as road following and vehicle and pedestrian detection. Significant efforts have been made to solve this task using vision-based techniques. The major challenge is to deal with lighting variations and the presence of objects on the road surface. In this paper, we propose a new road detection method to infer the areas of the image depicting road surfaces without performing any image segmentation. The idea is to first segment, manually or semi-automatically, the road region in a traffic-free reference video recorded on a first drive, and then to transfer these regions, in an on-line manner, to the frames of a second video sequence acquired later on a second drive through the same road. This is possible because we are able to automatically align the two videos in time and space, that is, to synchronize them and warp each frame of the first video to its corresponding frame in the second one. The geometric transform can thus transfer the road region to the present frame on-line. To cope with the different lighting conditions present in outdoor scenarios, our approach represents images in an illuminant-invariant (shadowless) feature space. Furthermore, we propose a dynamic background subtraction algorithm which removes the regions containing vehicles in the observed frames that lie within the transferred road region. |
|
|
Address |
Madeira Island (Portugal) |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
2153-0009 |
ISBN |
978-1-4244-7657-2 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ITSC |
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
ADAS @ adas @ DAS2010 |
Serial |
1424 |
|
|
|
|
|
Author |
Diego Alejandro Cheda; Daniel Ponsa; Antonio Lopez |
|
|
Title |
Camera Egomotion Estimation in the ADAS Context |
Type |
Conference Article |
|
Year |
2010 |
Publication |
13th International IEEE Annual Conference on Intelligent Transportation Systems |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
1415–1420 |
|
|
Keywords |
|
|
|
Abstract |
Camera-based Advanced Driver Assistance Systems (ADAS) have concentrated many research efforts in the last decades. Proposals based on monocular cameras require knowledge of the camera pose with respect to the environment in order to reach efficient and robust performance. A common assumption in such systems is to consider the road as planar and the camera pose with respect to it as approximately known. However, in real situations, the camera pose varies along time due to the vehicle movement, the road slope, and irregularities on the road surface. Thus, the changes in the camera position and orientation (i.e., the egomotion) are critical information that must be estimated at every frame to avoid poor performance. This work focuses on egomotion estimation from a monocular camera in the ADAS context. We review and compare egomotion methods on simulated and real ADAS-like sequences. Based on the results of our experiments, we show which of the considered nonlinear and linear algorithms have the best performance in this domain. |
|
|
Address |
Madeira Island (Portugal) |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
2153-0009 |
ISBN |
978-1-4244-7657-2 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ITSC |
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
ADAS @ adas @ CPL2010 |
Serial |
1425 |
|
|
|
|
|
Author |
David Aldavert; Ricardo Toledo; Arnau Ramisa; Ramon Lopez de Mantaras |
|
|
Title |
Efficient Object Pixel-Level Categorization using Bag of Features |
Type |
Conference Article |
|
Year |
2009 |
Publication |
5th International Symposium on Visual Computing |
Abbreviated Journal |
|
|
|
Volume |
5875 |
Issue |
|
Pages |
44–55 |
|
|
Keywords |
|
|
|
Abstract |
In this paper we present a pixel-level object categorization method suitable to be applied under real-time constraints. Since pixels are categorized using a bag-of-features scheme, the major bottleneck of such an approach would be the feature pooling in local histograms of visual words. Therefore, we propose to bypass this time-consuming step and directly obtain the score from a linear Support Vector Machine classifier. This is achieved by creating an integral image of the components of the SVM, which can readily yield the classification score for any image sub-window with only 10 additions and 2 products, regardless of its size. Besides, we evaluate the performance of two efficient feature quantization methods: the Hierarchical K-Means and the Extremely Randomized Forest. All experiments have been done on the Graz02 database, showing results comparable to, or even better than, related work at a lower computational cost. |
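The integral-image trick the abstract describes (constant-time linear-SVM scores for any sub-window over quantized features) can be sketched as follows. This is a minimal illustration with names of my choosing, not the authors' code; their quoted "10 additions and 2 products" folds in details (e.g., the bias and normalization) that this toy version simplifies.

```python
import numpy as np

def svm_score_map(word_map, weights):
    """Per-pixel SVM contribution: each pixel holds a visual-word id,
    and its contribution to a window's score is that word's SVM weight."""
    return weights[word_map]

def integral_image(img):
    """Cumulative sums along both axes: ii[y, x] = sum of img[:y+1, :x+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def window_score(ii, top, left, bottom, right, bias=0.0):
    """Score of the sub-window [top:bottom, left:right] from at most
    4 corner lookups of the integral image, independent of window size."""
    total = ii[bottom - 1, right - 1]
    if top > 0:
        total -= ii[top - 1, right - 1]
    if left > 0:
        total -= ii[bottom - 1, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total + bias

# Toy example: a 2x2 image over 2 visual words with weights -1 and +2.
word_map = np.array([[0, 1],
                     [1, 1]])
weights = np.array([-1.0, 2.0])
ii = integral_image(svm_score_map(word_map, weights))
score = window_score(ii, 0, 0, 2, 2)  # whole image: -1 + 2 + 2 + 2 = 5
```

The point of the construction is that scoring every candidate window costs the same few lookups whatever its size, which is what makes pixel-level categorization feasible in real time.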
|
|
Address |
Las Vegas, USA |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer Berlin Heidelberg |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0302-9743 |
ISBN |
978-3-642-10330-8 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ISVC |
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
Admin @ si @ ATR2009a |
Serial |
1246 |
|
|
|
|
|
Author |
Bogdan Raducanu; Fadi Dornaika |
|
|
Title |
Natural Facial Expression Recognition Using Dynamic and Static Schemes |
Type |
Conference Article |
|
Year |
2009 |
Publication |
5th International Symposium on Visual Computing |
Abbreviated Journal |
|
|
|
Volume |
5875 |
Issue |
|
Pages |
730–739 |
|
|
Keywords |
|
|
|
Abstract |
Affective computing is at the core of a new paradigm in HCI and AI represented by human-centered computing. Within this paradigm, it is expected that machines will be endowed with perceiving capabilities, making them aware of users' affective state. The current paper addresses the problem of facial expression recognition from monocular video sequences. We propose a dynamic facial expression recognition scheme, which is proven to be very efficient. Furthermore, it is conveniently compared with several static-based systems adopting different magnitudes of facial expression. We provide evaluations of performance using Linear Discriminant Analysis (LDA), Nonparametric Discriminant Analysis (NDA), and Support Vector Machines (SVM). We also provide performance evaluations using arbitrary test video sequences. |
|
|
Address |
Las Vegas, USA |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer Berlin Heidelberg |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0302-9743 |
ISBN |
978-3-642-10330-8 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ISVC |
|
|
Notes |
OR;MV |
Approved |
no |
|
|
Call Number |
BCNPCL @ bcnpcl @ RaD2009 |
Serial |
1257 |
|
|
|
|
|
Author |
Aleksandr Setkov; Fabio Martinez Carillo; Michele Gouiffes; Christian Jacquemin; Maria Vanrell; Ramon Baldrich |
|
|
Title |
DAcImPro: A Novel Database of Acquired Image Projections and Its Application to Object Recognition |
Type |
Conference Article |
|
Year |
2015 |
Publication |
Advances in Visual Computing. Proceedings of 11th International Symposium, ISVC 2015 Part II |
Abbreviated Journal |
|
|
|
Volume |
9475 |
Issue |
|
Pages |
463-473 |
|
|
Keywords |
Projector-camera systems; Feature descriptors; Object recognition |
|
|
Abstract |
Projector-camera systems are designed to improve the projection quality by comparing original images with their captured projections, which is usually complicated by high photometric and geometric variations. Many research works address this problem using their own test data, which makes it extremely difficult to compare different proposals. This paper has two main contributions. Firstly, we introduce a new database of acquired image projections (DAcImPro) that, covering photometric and geometric conditions and providing data for ground-truth computation, can serve to evaluate different algorithms in projector-camera systems. Secondly, a new object recognition scenario from acquired projections is presented, which could be of great interest in domains such as home video projections and public presentations. We show that the task is more challenging than the classical recognition problem and thus requires additional pre-processing, such as color compensation or projection area selection. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer International Publishing |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0302-9743 |
ISBN |
978-3-319-27862-9 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ISVC |
|
|
Notes |
CIC |
Approved |
no |
|
|
Call Number |
Admin @ si @ SMG2015 |
Serial |
2736 |
|
|
|
|
|
Author |
Marçal Rusiñol; Dimosthenis Karatzas; Josep Llados |
|
|
Title |
Automatic Verification of Properly Signed Multi-page Document Images |
Type |
Conference Article |
|
Year |
2015 |
Publication |
Proceedings of the Eleventh International Symposium on Visual Computing |
Abbreviated Journal |
|
|
|
Volume |
9475 |
Issue |
|
Pages |
327-336 |
|
|
Keywords |
Document Image; Manual Inspection; Signature Verification; Rejection Criterion; Document Flow |
|
|
Abstract |
In this paper we present an industrial application for the automatic screening of incoming multi-page documents in a banking workflow, aimed at determining whether these documents are properly signed or not. The proposed method is divided into three main steps. First, individual pages are classified in order to identify the pages that should contain a signature. In a second step, we segment within those key pages the locations where the signatures should appear. The last step checks whether the signatures are present or not. Our method is tested in a real large-scale environment and we report the results when checking two different types of real multi-page contracts, having in total more than 14,500 pages. |
|
|
Address |
Las Vegas, Nevada, USA; December 2015 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
9475 |
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ISVC |
|
|
Notes |
DAG; 600.077 |
Approved |
no |
|
|
Call Number |
Admin @ si @ |
Serial |
3189 |
|
|
|
|
|
Author |
Henry Velesaca; Patricia Suarez; Dario Carpio; Angel Sappa |
|
|
Title |
Synthesized Image Datasets: Towards an Annotation-Free Instance Segmentation Strategy |
Type |
Conference Article |
|
Year |
2021 |
Publication |
16th International Symposium on Visual Computing |
Abbreviated Journal |
|
|
|
Volume |
13017 |
Issue |
|
Pages |
131–143 |
|
|
Keywords |
|
|
|
Abstract |
This paper presents a complete pipeline to perform deep learning-based instance segmentation of different types of grains (e.g., corn, sunflower, soybeans, lentils, chickpeas, mote, and beans). The proposed approach consists of using synthesized image datasets for the training process, which are easily generated according to the category of the instance to be segmented. The synthesized imaging process allows generating a large set of well-annotated grain samples with high variability, as large as the user requires. Instance segmentation is performed through a popular deep learning-based approach, the Mask R-CNN architecture, but any learning-based instance segmentation approach can be considered. Results obtained by the proposed pipeline show that the strategy of using synthesized image datasets for training helps to avoid the time-consuming image annotation stage, as well as to achieve higher intersection-over-union and average precision performances. Results obtained with different varieties of grains are shown, as well as comparisons with manually annotated images, showing both the simplicity of the process and the improvements in performance. |
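The annotation-free aspect of this abstract comes from the fact that when object crops are composited onto a background, the paste alphas are the instance masks, so ground truth is generated alongside the image. A toy grayscale sketch of that compositing step, with all names and shapes hypothetical (the paper trains Mask R-CNN on such data; this only illustrates the dataset synthesis idea):

```python
import numpy as np

def composite_instances(background, crops, positions):
    """Paste object crops onto a background; each crop is a (patch, alpha)
    pair of equal-shaped arrays. The pasted alphas ARE the per-instance
    masks, so the segmentation annotation comes for free."""
    img = background.copy()
    masks = []
    for (patch, alpha), (r, c) in zip(crops, positions):
        h, w = patch.shape
        region = img[r:r + h, c:c + w]
        # Overwrite only where the crop's alpha is on.
        img[r:r + h, c:c + w] = np.where(alpha > 0, patch, region)
        mask = np.zeros(background.shape, dtype=bool)
        mask[r:r + h, c:c + w] = alpha > 0
        masks.append(mask)
    return img, masks

# Toy example: one 2x2 "grain" (with one transparent corner) on a 4x4 background.
bg = np.zeros((4, 4))
patch = np.full((2, 2), 0.8)
alpha = np.array([[1, 1],
                  [1, 0]])
img, masks = composite_instances(bg, [(patch, alpha)], [(1, 1)])
# masks[0] marks exactly the 3 pasted pixels
```

Randomizing crop selection, position, scale, and background yields a dataset as large as desired, which is the variability claim in the abstract.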
|
|
Address |
Virtual; October 2021 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ISVC |
|
|
Notes |
MSIAU |
Approved |
no |
|
|
Call Number |
Admin @ si @ VSC2021 |
Serial |
3667 |
|
|
|
|
|
Author |
Patricia Suarez; Dario Carpio; Angel Sappa |
|
|
Title |
Non-homogeneous Haze Removal Through a Multiple Attention Module Architecture |
Type |
Conference Article |
|
Year |
2021 |
Publication |
16th International Symposium on Visual Computing |
Abbreviated Journal |
|
|
|
Volume |
13018 |
Issue |
|
Pages |
178–190 |
|
|
Keywords |
|
|
|
Abstract |
This paper presents a novel attention-based architecture to remove non-homogeneous haze. The proposed model focuses on obtaining the most representative characteristics of the image at each learning cycle, by means of adaptive attention modules coupled with a residual learning convolutional network, the latter based on the Res2Net model. The proposed architecture is trained with just a small set of images. Its performance is evaluated on a public benchmark (images from the non-homogeneous haze NTIRE 2021 challenge) and compared with state-of-the-art approaches, reaching the best result. |
|
|
Address |
Virtual; October 2021 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ISVC |
|
|
Notes |
MSIAU |
Approved |
no |
|
|
Call Number |
Admin @ si @ SCS2021 |
Serial |
3668 |
|
|
|
|
|
Author |
Patricia Suarez; Angel Sappa; Dario Carpio; Henry Velesaca; Francisca Burgos; Patricia Urdiales |
|
|
Title |
Deep Learning Based Shrimp Classification |
Type |
Conference Article |
|
Year |
2022 |
Publication |
17th International Symposium on Visual Computing |
Abbreviated Journal |
|
|
|
Volume |
13598 |
Issue |
|
Pages |
36–45 |
|
|
Keywords |
Pigmentation; Color space; Light weight network |
|
|
Abstract |
This work proposes a novel approach based on deep learning to address the classification of shrimp (Penaeus vannamei) into two classes, according to the level of pigmentation accepted by shrimp commerce. The main goal of this study is to support the shrimp industry in terms of price and process. An efficient CNN architecture is proposed to perform image classification through a program that could be deployed either on mobile devices or on fixed supports in the shrimp supply chain. The proposed approach is a lightweight model that uses shrimp images in the HSV color space. A simple pipeline shows the most important stages performed to determine a pattern that identifies the class to which they belong based on their pigmentation. For the experiments, a database of shrimp images acquired with mobile devices of various brands and models has been used. The results obtained with the images in the RGB and HSV color spaces allow testing the effectiveness of the proposed model. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ISVC |
|
|
Notes |
MSIAU; no proj |
Approved |
no |
|
|
Call Number |
Admin @ si @ SAC2022 |
Serial |
3772 |
|
|
|
|
|
Author |
Fernando Vilariño; Dimosthenis Karatzas; Alberto Valcarce |
|
|
Title |
Libraries as New Innovation Hubs: The Library Living Lab |
Type |
Conference Article |
|
Year |
2018 |
Publication |
30th ISPIM Innovation Conference |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
Libraries are in deep transformation both in the EU and around the world, and they are thriving within a great window of opportunity for innovation. In this paper, we show how the Library Living Lab in Barcelona participated in this changing scenario and contributed to creating the Bibliolab program, in which more than 200 public libraries give voice to their users in a global user-centric innovation initiative, using technology as an enabling factor. The Library Living Lab is a real 4-helix implementation where Universities, Research Centers, Public Administration, Companies and the Neighbors are joined together to explore how technology transforms the cultural experience of people. This case is an example of scalability and provides reference tools for policy making, sustainability, user engagement methodologies and governance. We provide specific examples of new prototypes and services that help to understand how to redefine the role of the Library as a real hub for social innovation. |
|
|
Address |
Stockholm; May 2018 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ISPIM |
|
|
Notes |
DAG; MV; 600.097; 600.121; 600.129;SIAI |
Approved |
no |
|
|
Call Number |
Admin @ si @ VKV2018b |
Serial |
3154 |
|
|
|
|
|
Author |
Corina Krauter; Ursula Reiter; Albrecht Schmidt; Marc Masana; Rudolf Stollberger; Michael Fuchsjager; Gert Reiter |
|
|
Title |
Objective extraction of the temporal evolution of the mitral valve vortex ring from 4D flow MRI |
Type |
Conference Article |
|
Year |
2019 |
Publication |
27th Annual Meeting & Exhibition of the International Society for Magnetic Resonance in Medicine |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
The mitral valve vortex ring is a promising flow structure for the analysis of diastolic function; however, methods for objectively extracting it from formation to dissolution are lacking. We present a novel algorithm for objective extraction of the temporal evolution of the mitral valve vortex ring from magnetic resonance 4D flow data and validate the method against visual analysis. The algorithm successfully extracted mitral valve vortex rings during both early- and late-diastolic filling and agreed substantially with visual assessment. Early-diastolic mitral valve vortex ring properties differed between healthy subjects and patients with ischemic heart disease. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ISMRM |
|
|
Notes |
LAMP; 600.120 |
Approved |
no |
|
|
Call Number |
Admin @ si @ KRS2019 |
Serial |
3300 |
|
|
|
|
|
Author |
Pau Torras; Arnau Baro; Lei Kang; Alicia Fornes |
|
|
Title |
On the Integration of Language Models into Sequence to Sequence Architectures for Handwritten Music Recognition |
Type |
Conference Article |
|
Year |
2021 |
Publication |
International Society for Music Information Retrieval Conference |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
690-696 |
|
|
Keywords |
|
|
|
Abstract |
Despite the latest advances in Deep Learning, the recognition of handwritten music scores is still a challenging endeavour. Even though recent Sequence to Sequence (Seq2Seq) architectures have demonstrated their capacity to reliably recognise handwritten text, their performance is still far from satisfactory when applied to historical handwritten scores. Indeed, the ambiguous nature of handwriting, the non-standard musical notation employed by composers of the time and the decaying state of old paper make these scores remarkably difficult to read, sometimes even by trained humans. Thus, in this work we explore the incorporation of language models into a Seq2Seq-based architecture to try to improve transcriptions where the aforementioned unclear writing produces statistically unsound mistakes, which, as far as we know, has never been attempted on this architecture for this field of research. After studying various Language Model integration techniques, the experimental evaluation on historical handwritten music scores shows a significant improvement over the state of the art, suggesting that this is a promising research direction for dealing with such difficult manuscripts. |
|
|
Address |
Virtual; November 2021 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference ![sorted by Conference field, descending order (down)](img/sort_desc.gif) |
ISMIR |
|
|
Notes |
DAG; 600.140; 600.121 |
Approved |
no |
|
|
Call Number |
Admin @ si @ TBK2021 |
Serial |
3616 |
|