J. Nuñez, Xavier Otazu, & M.T. Merino. (2005). A Multiresolution-Based Method for the Determination of the Relative Resolution between Images. First Application to Remote Sensing and Medical Images. International Journal of Imaging Systems and Technology, 15(5): 225–235 (IF: 0.439).
|
J. Nuñez, O. Fors, Xavier Otazu, Vicenç Pala, Roman Arbiol, & M.T. Merino. (2006). A Wavelet-Based Method for the Determination of the Relative Resolution Between Remotely Sensed Images. IEEE Transactions on Geoscience and Remote Sensing, 44(9): 2539–2548.
|
J. Mauri, Eduard Fernandez-Nofrerias, B. Garcia del Blanco, E. Iraculis, J.A. Gomez-Hospital, J. Comin, et al. (2000). Moviment del vas en l'anàlisi d'imatges d'ecografia intracoronària: un model matemàtic [Vessel motion in the analysis of intracoronary ultrasound images: a mathematical model]. In Congrés de la Societat Catalana de Cardiologia.
|
J. Mauri, E. Esplugas, B. Garcia del Blanco, E. Fernandez-Nofrerias, A. Cequier, J.A. Gomez-Hospital, et al. (2000). 3-D Stent and Vessel Reconstruction from IVUS: a Physics-Based Approach.
|
J. Mauri, E. Fernandez-Nofrerias, Petia Radeva, & V. Valle. (2000). Ecografia intracoronària, una ajuda o un mestre en l'intervencionisme coronari? [Intracoronary ultrasound: an aid or a teacher in coronary intervention?].
|
J. Mauri, E. Fernandez-Nofrerias, E. Esplugas, A. Cequier, David Rotger, Ricardo Toledo, et al. (2000). Ecografía Intracoronaria: Navegación Informática por el cubo de datos de las imágenes [Intracoronary Ultrasound: Computer Navigation through the Image Data Cube].
|
J. Mauri, E. Fernandez-Nofrerias, A. Tovar, E. Martinez, L. Cano, V. Valle, et al. (2001). Ecografia Intracoronària: Un Nou Pas, la Fusió d'Imatges amb l'Angiografia, el Software [Intracoronary Ultrasound: A New Step, Image Fusion with Angiography, the Software]. Revista de la Societat Catalana de Cardiologia, XIIIè Congrés de la Societat Catalana de Cardiologia, 4(1): 48.
|
J. Mauri, Eduard Fernandez-Nofrerias, J. Comin, B. Garcia del Blanco, E. Iraculis, J.A. Gomez-Hospital, et al. (2000). Avaluació del Conjunt Stent/Artèria mitjançant ecografia intracoronària: l'entorn informàtic [Evaluation of the Stent/Artery Ensemble Using Intracoronary Ultrasound: the Computing Environment]. In Congrés de la Societat Catalana de Cardiologia.
|
J. Garcia, J.M. Sanchez, X. Orriols, & X. Binefa. (2000). Chromatic aberration and depth extraction. In 15th International Conference on Pattern Recognition (Vol. 1, pp. 762–765).
|
J. Filipe, Juan Andrade, & J.L. Ferrier. (2005). FAF 2005.
|
J. Elder, Fadi Dornaika, Y. Hou, & R. Goldstein. (2005). Attentive wide-field sensing for visual telepresence and surveillance. In L. Itti, G. Rees and J. Tsotsos (editors), Neurobiology of Attention, Academic Press / Elsevier.
|
J. Chazalon, P. Gomez-Kramer, Jean-Christophe Burie, M. Coustaty, S. Eskenazi, Muhammad Muzzamil Luqman, et al. (2017). SmartDoc 2017 Video Capture: Mobile Document Acquisition in Video Mode. In 1st International Workshop on Open Services and Tools for Document Analysis.
Abstract: As mobile document acquisition using smartphones becomes increasingly common, and as mobile devices keep improving in both computing power and image quality, one may wonder to what extent mobile phones can replace desktop scanners. Modern applications can cope with perspective distortion and normalize the contrast of a document page captured with a smartphone, and in some cases, such as bottle labels or posters, smartphones even have the advantage of allowing the acquisition of non-flat or large documents. However, several cases remain hard to handle, such as reflective documents (identity cards, badges, glossy magazine covers, etc.) or large documents in which some regions require a substantial amount of detail. This paper introduces the SmartDoc 2017 benchmark (named “SmartDoc Video Capture”), which aims at assessing whether capturing documents with the video mode of a smartphone could solve these issues. The task under evaluation is both a stitching and a reconstruction problem, as the user can move the device over different parts of the document to capture details or try to erase highlights. The released material consists of a dataset, an evaluation method and its associated tool, a sample method, and the tools required to extend the dataset. All components are released publicly under very permissive licenses, with particular care taken to maximize ease of understanding, use, and improvement.
|
J. Chazalon, Marçal Rusiñol, Jean-Marc Ogier, & Josep Llados. (2015). A Semi-Automatic Groundtruthing Tool for Mobile-Captured Document Segmentation. In 13th International Conference on Document Analysis and Recognition ICDAR2015 (pp. 621–625).
Abstract: This paper presents a novel way to generate ground-truth data for the evaluation of mobile document capture systems, focusing on the first stage of the image processing pipeline involved: document object detection and segmentation in low-quality preview frames. We introduce and describe a simple, robust and fast technique based on color markers which enables semi-automated annotation of page corners. We also detail a technique for marker removal. The methods and tools presented in the paper were successfully used to annotate, in a few hours, 24,889 frames in 150 video files for the SmartDoc competition at ICDAR 2015.
|
J. Chazalon, Marçal Rusiñol, & Jean-Marc Ogier. (2015). Improving Document Matching Performance by Local Descriptor Filtering. In 6th IAPR International Workshop on Camera Based Document Analysis and Recognition CBDAR2015 (pp. 1216–1220).
Abstract: In this paper we propose an effective method for reducing the number of local descriptors to be indexed in a document matching framework. In an off-line training stage, the matching between the model document and incoming images is computed, retaining the local descriptors from the model that consistently produce good matches. We evaluated this approach using the ICDAR2015 SmartDoc dataset, which contains nearly 25,000 images of documents captured by a mobile device. We tested the performance of this filtering step using ORB and SIFT local detectors and descriptors. The results show a significant gain both in the quality of the final matching and in time and space requirements.
|
Ivo Everts, Jan van Gemert, & Theo Gevers. (2013). Evaluation of Color STIPs for Human Action Recognition. In IEEE Conference on Computer Vision and Pattern Recognition (pp. 2850–2857).
Abstract: This paper is concerned with recognizing realistic human actions in videos based on spatio-temporal interest points (STIPs). Existing STIP-based action recognition approaches operate on intensity representations of the image data. Because of this, these approaches are sensitive to disturbing photometric phenomena such as highlights and shadows. Moreover, valuable information is neglected by discarding chromaticity from the photometric representation. These issues are addressed by Color STIPs. Color STIPs are multi-channel reformulations of existing intensity-based STIP detectors and descriptors, for which we consider a number of chromatic representations derived from the opponent color space. This enhanced modeling of appearance improves the quality of subsequent STIP detection and description. Color STIPs are shown to substantially outperform their intensity-based counterparts on the challenging UCF Sports, UCF11 and UCF50 action recognition benchmarks. Moreover, the results show that color STIPs are currently the single best low-level feature choice for STIP-based approaches to human action recognition.
|