|
Fernando Vilariño. (2017). Citizen experience as a powerful communication tool: Open Innovation and the role of Living Labs in EU. In European Conference of Science Journalists.
Abstract: The Open Innovation 2.0 model spearheaded by the European Commission introduces conceptual changes in how innovation processes should be developed. The notion of an innovation ecosystem, and the active participation of citizens (and all the other actors of the quadruple helix) in innovation processes, open up new channels for scientific communication, through which citizens can be reached naturally and can help spread the scientific message in their communities. Unleashing the power of such mechanisms while maintaining control over the scientific communication carried out through these channels is at once an opportunity and a challenge.
This workshop will look into key concepts that the Open Innovation 2.0 EU model introduces, and what new opportunities for communication they bring about. Specifically, we will focus on Living Labs, as a key instrument for implementing this innovation model at the regional level, and their potential in creating scientific dissemination spaces.
|
|
|
Antonio Lopez, & Joan Serrat. (1995). Image Analysis through Surface Geometric Descriptors.
|
|
|
Jordi Vitria, & J. Llacer. (1995). Recovering brightness and depth from focus using the Expectation-Maximization Algorithm.
|
|
|
D. Seron, F. Moreso, C. Gratin, & Jordi Vitria. (1995). Morphological Granulometries and Quantification of Interstitial Chronic Renal Damage.
|
|
|
C. Molina, & J.B. Subirana. (1995). Polynomial-Time Algorithm for 2D Object Recognition.
|
|
|
J.R. Serra, S. Casadei, & J.B. Subirana. (1995). Non-Cartesian Networks for Middle Level Vision.
|
|
|
David Fernandez, Pau Riba, Alicia Fornes, & Josep Llados. (2014). On the Influence of Key Point Encoding for Handwritten Word Spotting. In 14th International Conference on Frontiers in Handwriting Recognition (pp. 476–481).
Abstract: In this paper we evaluate the influence of the selection of key points and the associated features on the performance of word spotting processes. In general, features can be extracted from a number of characteristic points such as corners, contours, skeletons, maxima, minima, crossings, etc. A number of descriptors exist in the literature using different interest point detectors. However, the intrinsic variability of handwriting strongly affects performance if the interest points are not stable enough. In this paper, we analyze the performance of different descriptors for local interest points. As benchmarking dataset we have used the Barcelona Marriage Database, which contains handwritten records of marriages over five centuries.
Keywords: Local descriptors; Interest points; Handwritten documents; Word spotting; Historical document analysis
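The entry above contrasts interest-point detectors and local descriptors for word spotting. As a minimal illustration of that pipeline (not the paper's implementation; all function names and parameters below are illustrative), one can detect corner-like points with a Harris-style response and describe each point with a histogram of gradient orientations:

```python
import numpy as np

def box3(a):
    """3x3 box filter (edge-padded) used to smooth the structure tensor."""
    p = np.pad(a, 1, mode='edge')
    h, w = a.shape
    return sum(p[i:i+h, j:j+w] for i in range(3) for j in range(3)) / 9.0

def harris_keypoints(img, k=0.04, rel_thresh=0.1):
    """Return (row, col) interest points from a minimal Harris detector."""
    gy, gx = np.gradient(img.astype(float))
    sxx, syy, sxy = box3(gx * gx), box3(gy * gy), box3(gx * gy)
    r = (sxx * syy - sxy ** 2) - k * (sxx + syy) ** 2
    mask = r > rel_thresh * r.max()
    # Crude non-maximum suppression: keep only local maxima of the response.
    p = np.pad(r, 1, mode='constant', constant_values=-np.inf)
    h, w = r.shape
    local_max = np.ones_like(mask)
    for i in range(3):
        for j in range(3):
            if (i, j) != (1, 1):
                local_max &= r >= p[i:i+h, j:j+w]
    return list(zip(*np.nonzero(mask & local_max)))

def orientation_histogram(img, pt, radius=4, bins=8):
    """HOG-style descriptor: magnitude-weighted orientation histogram of a patch."""
    y, x = pt
    patch = img[max(0, y - radius):y + radius + 1,
                max(0, x - radius):x + radius + 1].astype(float)
    gy, gx = np.gradient(patch)
    ang = np.arctan2(gy, gx).ravel()   # gradient orientations in [-pi, pi]
    mag = np.hypot(gx, gy).ravel()     # gradient magnitudes as weights
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    n = np.linalg.norm(hist)
    return hist / n if n > 0 else hist
```

The paper's point is visible even in this toy: if the detector fires on unstable locations (e.g. stroke wobble rather than true corners), the descriptors extracted at those points vary across instances of the same word, hurting retrieval.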
|
|
|
David Fernandez, Jon Almazan, Nuria Cirera, Alicia Fornes, & Josep Llados. (2014). BH2M: the Barcelona Historical Handwritten Marriages database. In 22nd International Conference on Pattern Recognition (pp. 256–261).
Abstract: This paper presents an image database of historical handwritten marriage records stored in the archives of Barcelona cathedral, and the corresponding meta-data, addressed to evaluate the performance of document analysis algorithms. The contribution of this paper is twofold. First, it presents a complete ground truth which covers the whole pipeline of handwriting recognition research, from layout analysis to recognition and understanding. Second, it is the first dataset in the emerging area of genealogical document analysis, where documents are pseudo-structured manuscripts with specific lexicons and the interest goes beyond pure transcription to context-dependent understanding.
|
|
|
Pau Riba, Jon Almazan, Alicia Fornes, David Fernandez, Ernest Valveny, & Josep Llados. (2014). e-Crowds: a mobile platform for browsing and searching in historical demography-related manuscripts. In 14th International Conference on Frontiers in Handwriting Recognition (pp. 228–233).
Abstract: This paper presents a prototype system running on portable devices for browsing and word searching through historical handwritten document collections. The platform adapts the paradigm of eBook reading, where the narrative is not necessarily sequential, but centered on the user's actions. The novelty is to replace digitally born books by digitized historical manuscripts of marriage licenses, so document analysis tasks are required in the browser. With an active reading paradigm, the user can cast queries for people's names and thus implicitly follow genealogical links. In addition, the system allows combined searches: the user can refine a search by adding more words. As a second contribution, the retrieval functionality involves as a core technology a word spotting module with a unified approach, which allows combined query searches and two input modalities: query-by-example and query-by-string.
|
|
|
Marco Pedersoli, Jordi Gonzalez, Andrew Bagdanov, & Juan J. Villanueva. (2010). Recursive Coarse-to-Fine Localization for fast Object Recognition. In 11th European Conference on Computer Vision (Vol. 6313, pp. 280–293). LNCS. Springer Berlin Heidelberg.
Abstract: Cascading techniques are commonly used to speed up the scan of an image for object detection. However, cascades of detectors are slow to train due to the high number of detectors and corresponding thresholds to learn. Furthermore, they do not use any prior knowledge about the scene structure to decide where to focus the search. To handle these problems, we propose a new way to scan an image, where we couple a recursive coarse-to-fine refinement together with spatial constraints on the object location. To do so, we split an image into a set of uniformly distributed neighborhood regions, and for each of these we apply a local greedy search over feature resolutions. The neighborhood is defined as a scanning region that only one object can occupy. Therefore the best hypothesis is obtained as the location with maximum score and no thresholds are needed. We present an implementation of our method using a pyramid of HOG features and we evaluate it on two standard databases, VOC2007 and INRIA. Results show that the Recursive Coarse-to-Fine Localization (RCFL) achieves a 12x speed-up compared to standard sliding windows. Compared with a multi-resolution cascade approach, our method performs slightly better in both speed and average precision. Furthermore, in contrast to cascading approaches, the speed-up is independent of image conditions, the number of detected objects, and clutter.
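The coarse-to-fine idea in the abstract above can be sketched in a few lines: score a low-resolution feature map everywhere, then re-score at full resolution only around the best coarse hypothesis. This is a toy illustration under simplifying assumptions (a single linear template, plain downsampling instead of a HOG pyramid), not the RCFL implementation:

```python
import numpy as np

def score_map(feat, w):
    """Cross-correlate template `w` over feature map `feat` (valid mode)."""
    th, tw = w.shape
    out = np.empty((feat.shape[0] - th + 1, feat.shape[1] - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(feat[y:y + th, x:x + tw] * w)
    return out

def coarse_to_fine(feat_fine, w_fine, stride=2):
    """Greedy coarse-to-fine search: score a downsampled map exhaustively,
    then refine only inside the winning coarse neighborhood."""
    feat_coarse = feat_fine[::stride, ::stride]
    w_coarse = w_fine[::stride, ::stride]
    s = score_map(feat_coarse, w_coarse)
    cy, cx = np.unravel_index(np.argmax(s), s.shape)
    # Refine at full resolution around the best coarse location only.
    y0, x0 = cy * stride, cx * stride
    th, tw = w_fine.shape
    best, best_pos = -np.inf, (y0, x0)
    for dy in range(-stride, stride + 1):
        for dx in range(-stride, stride + 1):
            y, x = y0 + dy, x0 + dx
            if 0 <= y <= feat_fine.shape[0] - th and 0 <= x <= feat_fine.shape[1] - tw:
                sc = np.sum(feat_fine[y:y + th, x:x + tw] * w_fine)
                if sc > best:
                    best, best_pos = sc, (y, x)
    return best_pos, best
```

Because refinement touches only a small window around the coarse maximum rather than every fine-grid position, the full-resolution cost is paid once per neighborhood, which is where the speed-up over exhaustive sliding windows comes from.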
|
|
|
Carles Fernandez, Jordi Gonzalez, & Xavier Roca. (2010). Automatic Learning of Background Semantics in Generic Surveilled Scenes. In 11th European Conference on Computer Vision (Vol. 6313, pp. 678–692). LNCS. Springer Berlin Heidelberg.
Abstract: Advanced surveillance systems for behavior recognition in outdoor traffic scenes depend strongly on the particular configuration of the scenario. Scene-independent trajectory analysis techniques statistically infer semantics in locations where motion occurs, and such inferences are typically limited to abnormality. Thus, it is interesting to design contributions that automatically categorize more specific semantic regions. State-of-the-art approaches for unsupervised scene labeling exploit trajectory data to segment areas like sources, sinks, or waiting zones. Our method, in addition, incorporates scene-independent knowledge to assign more meaningful labels like crosswalks, sidewalks, or parking spaces. First, a spatiotemporal scene model is obtained from trajectory analysis. Subsequently, a so-called GI-MRF inference process reinforces spatial coherence, and incorporates taxonomy-guided smoothness constraints. Our method achieves automatic and effective labeling of conceptual regions in urban scenarios, and is robust to tracking errors. Experimental validation on 5 surveillance databases has been conducted to assess the generality and accuracy of the segmentations. The resulting scene models are used for model-based behavior analysis.
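The abstract above describes an MRF inference step that reinforces spatial coherence of region labels. A minimal sketch of that kind of smoothing, using iterated conditional modes with a Potts pairwise term on a 4-connected grid (an illustrative stand-in, not the paper's GI-MRF procedure; all names and parameters are assumptions):

```python
import numpy as np

def icm_label_smoothing(unary, beta=0.5, iters=5):
    """Iterated conditional modes on a 4-connected grid MRF.

    `unary` has shape (H, W, L): the per-cell cost of each of L labels.
    `beta` weighs a Potts smoothness term that charges neighbors
    for disagreeing, which irons out isolated noisy labels.
    """
    labels = unary.argmin(axis=2)          # start from the unary-only labeling
    h, w, L = unary.shape
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                costs = unary[y, x].copy()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w:
                        # Potts penalty: +beta for each disagreeing neighbor.
                        costs += beta * (np.arange(L) != labels[ny, nx])
                labels[y, x] = costs.argmin()
    return labels
```

The same mechanism generalizes to the taxonomy-guided constraints mentioned in the abstract by replacing the uniform Potts penalty with label-pair-specific costs (e.g. crosswalk next to sidewalk cheaper than crosswalk next to parking space).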
|
|
|
Angel Sappa, & George A. Triantafyllidis. (2012). Computer Graphics and Imaging.
|
|
|
Simone Balocco, Mauricio Gonzalez, Ricardo Ñanculef, Petia Radeva, & Gabriel Thomas. (2018). Calcified Plaque Detection in IVUS Sequences: Preliminary Results Using Convolutional Nets. In International Workshop on Artificial Intelligence and Pattern Recognition (Vol. 11047, pp. 34–42). LNCS.
Abstract: The manual inspection of intravascular ultrasound (IVUS) images to detect clinically relevant patterns is a difficult and laborious task performed routinely by physicians. In this paper, we present a framework based on convolutional nets for the quick selection of IVUS frames containing arterial calcification, a pattern whose detection plays a vital role in the diagnosis of atherosclerosis. Preliminary experiments on a dataset acquired from eighty patients show that convolutional architectures improve on the detections of a shallow classifier in terms of F1-measure, precision and recall.
Keywords: Intravascular ultrasound images; Convolutional nets; Deep learning; Medical image analysis
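The entry above frames calcification detection as per-frame binary classification with a convolutional net. As a structural sketch only (a single hand-rolled conv layer with ReLU, global average pooling, and a logistic output; the paper's trained architecture is not reproduced here, and every name below is illustrative):

```python
import numpy as np

def conv2d(img, kernels):
    """Valid-mode 2D cross-correlation of a grayscale image with a kernel bank."""
    kh, kw = kernels.shape[1:]
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((kernels.shape[0], oh, ow))
    for c, k in enumerate(kernels):
        for y in range(oh):
            for x in range(ow):
                out[c, y, x] = np.sum(img[y:y + kh, x:x + kw] * k)
    return out

def frame_score(img, kernels, w, b):
    """Conv -> ReLU -> global average pool -> logistic score in (0, 1).

    Returns the probability-like score that a frame contains the
    target pattern; thresholding it selects candidate frames.
    """
    feat = np.maximum(conv2d(img, kernels), 0.0)      # ReLU nonlinearity
    pooled = feat.mean(axis=(1, 2))                   # one value per channel
    return 1.0 / (1.0 + np.exp(-(pooled @ w + b)))    # sigmoid output
```

Ranking or thresholding such per-frame scores is what turns a classifier into the "quick selection" tool the abstract describes: only frames above the threshold are forwarded to the physician for inspection.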
|
|
|
Partha Pratim Roy, Eduard Vazquez, Josep Llados, Ramon Baldrich, & Umapada Pal. (2007). A System to Retrieve Text/Symbols from Color Maps using Connected Component and Skeleton Analysis. In J.M. Ogier, W. Liu, & J. Llados (Eds.), Seventh IAPR International Workshop on Graphics Recognition (pp. 79–78).
|
|
|
Mathieu Nicolas Delalandre, Tony Pridmore, Ernest Valveny, Eric Trupin, & Herve Locteau. (2007). Building Synthetic Graphical Documents for Performance Evaluation. In Seventh IAPR International Workshop on Graphics Recognition (pp. 84–87).
|
|