|
Ognjen Rudovic, & Jordi Gonzalez. (2008). Building Temporal Templates for Human Behaviour Classification.
|
|
|
O. F. Ahmad, Y. Mori, M. Misawa, S. Kudo, J. T. Anderson, & Jorge Bernal. (2021). Establishing key research questions for the implementation of artificial intelligence in colonoscopy: a modified Delphi method. Endoscopy, 53(9), 893–901.
Abstract: BACKGROUND: Artificial intelligence (AI) research in colonoscopy is progressing rapidly but widespread clinical implementation is not yet a reality. We aimed to identify the top implementation research priorities. METHODS: An established modified Delphi approach for research priority setting was used. Fifteen international experts, including endoscopists and translational computer scientists/engineers, from nine countries participated in an online survey over 9 months. Questions related to AI implementation in colonoscopy were generated as a long-list in the first round, and then scored in two subsequent rounds to identify the top 10 research questions. RESULTS: The top 10 ranked questions were categorized into five themes. Theme 1: clinical trial design/end points (4 questions), related to optimum trial designs for polyp detection and characterization, determining the optimal end points for evaluation of AI, and demonstrating impact on interval cancer rates. Theme 2: technological developments (3 questions), including improving detection of more challenging and advanced lesions, reduction of false-positive rates, and minimizing latency. Theme 3: clinical adoption/integration (1 question), concerning the effective combination of detection and characterization into one workflow. Theme 4: data access/annotation (1 question), concerning more efficient or automated data annotation methods to reduce the burden on human experts. Theme 5: regulatory approval (1 question), related to making regulatory approval processes more efficient. CONCLUSIONS: This is the first reported international research priority setting exercise for AI in colonoscopy. The study findings should be used as a framework to guide future research with key stakeholders to accelerate the clinical implementation of AI in endoscopy.
|
|
|
O. Rodriguez, J. Mauri, E. Fernandez-Nofrerias, J. Lopez, A. Tovar, V. Valle, et al. (2003). Cuantificación tridimensional de la longitud de segmentos coronarios a partir de secuencias de ecografía intracoronaria. Revista Española de Cardiología, 56(2), Congreso de las Enfermedades Cardiovasculares.
|
|
|
O. Rodriguez, J. Mauri, E. Fernandez-Nofrerias, J. Lopez, A. Tovar, R. Villuendas, et al. (2003). Modelo físico para la simulación de imágenes de ecografía intracoronaria. Revista Española de Cardiología, 56(2), Congreso de las Enfermedades Cardiovasculares.
|
|
|
O. Rodriguez, J. Mauri, E. Fernandez-Nofrerias, C. Garcia, R. Villuendas, A. Tovar, et al. (2003). Model Empíric de Simulació d'Ecografia Intravascular. Revista Societat Catalana de Cardiologia, 4(4), 42, XIVè Congrés de la Societat Catalana de Cardiologia.
|
|
|
O. Rodriguez, J. Mauri, E. Fernandez-Nofrerias, A. Tovar, R. Villuendas, V. Valle, et al. (2003). Análisis de texturas mediante la modificación de un modelo binario local para la segmentación automática de secuencias de ecografía intracoronaria. Revista Española de Cardiología, 56(2), Congreso de las Enfermedades Cardiovasculares.
|
|
|
O. Rodriguez, David Rotger, J. Mauri, E. Fernandez, V. Valle, & Petia Radeva. (2004). Active vessel workstation: three-dimensional reconstruction of coronary arteries by fusion of angiography and intravascular ultrasound.
|
|
|
O. Fors, Xavier Otazu, & J. Nuñez. (2001). Fusión Mediante Wavelets de Imágenes SPOT-PAN y del Satélite Tailandés TMSAT.
|
|
|
O. Fors, J. Nuñez, Xavier Otazu, A. Prades, & Robert D. Cardinal. (2010). Improving the Ability of Image Sensors to Detect Faint Stars and Moving Objects Using Image Deconvolution Techniques. Sensors, 10(3), 1743–1752.
Abstract: In this paper we show how image deconvolution techniques can increase the ability of image sensors, such as CCD imagers, to detect faint stars or faint orbital objects (small satellites and space debris). In the case of faint stars, we show that this benefit is equivalent to doubling the quantum efficiency of the image sensor used, or to increasing the effective telescope aperture by more than 30%, without decreasing the astrometric precision or introducing artificial bias. In the case of orbital objects, the deconvolution technique can double the signal-to-noise ratio of the image, which helps to discover and control dangerous objects such as space debris or lost satellites. The benefits obtained using CCD detectors can be extrapolated to any kind of image sensor.
Keywords: image processing; image deconvolution; faint stars; space debris; wavelet transform
|
|
|
O. Fors, A. Richichi, Xavier Otazu, & J. Nuñez. (2008). A new wavelet-based approach for the automated treatment of large sets of lunar occultation data. Astronomy and Astrophysics, 297–304.
|
|
|
Nuria Cirera, Alicia Fornes, Volkmar Frinken, & Josep Llados. (2013). Hybrid grammar language model for handwritten historical documents recognition. In 6th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 7887, pp. 117–124). LNCS. Springer Berlin Heidelberg.
Abstract: In this paper we present a hybrid language model for the recognition of handwritten historical documents with a structured syntactical layout. Using a hidden Markov model-based recognition framework, a word-based grammar with a closed dictionary is enhanced by a character sequence recognition method. This makes it possible to recognize out-of-dictionary words in controlled parts of the recognition, while keeping a closed-vocabulary restriction for other parts. While the current status is work in progress, we can report an improvement in terms of character error rate.
|
|
|
Nuria Cirera, Alicia Fornes, & Josep Llados. (2015). Hidden Markov model topology optimization for handwriting recognition. In 13th International Conference on Document Analysis and Recognition ICDAR2015 (pp. 626–630).
Abstract: In this paper we present a method to optimize the topology of linear left-to-right hidden Markov models. These models are very popular for modeling sequential signals in tasks such as handwriting recognition. Many topology definition methods select the number of states for a character model based on character length. This can be a drawback when characters are shorter than the minimum allowed by the model, since they can be neither properly trained nor recognized. The proposed method optimizes the number of states per model by automatically including convenient skip-state transitions, and therefore it avoids the aforementioned problem. We discuss and compare our method with other character-length-based methods such as the Fixed, Bakis, and Quantile methods. Our proposal performs well on the off-line handwriting recognition task.
|
|
|
Nuria Cirera. (2012). Recognition of Handwritten Historical Documents (Vol. 174). Master's thesis.
|
|
|
Noha Elfiky, Theo Gevers, Arjan Gijsenij, & Jordi Gonzalez. (2014). Color Constancy using 3D Scene Geometry derived from a Single Image. IEEE Transactions on Image Processing, 23(9), 3855–3868.
Abstract: The aim of color constancy is to remove the effect of the color of the light source. As color constancy is inherently an ill-posed problem, most existing color constancy algorithms are based on specific imaging assumptions (e.g. the grey-world and white-patch assumptions). In this paper, 3D geometry models are used to determine which color constancy method to use for the different geometrical regions (depth/layer) found in images. The aim is to classify images into stages (rough 3D geometry models). According to the stage models, images are divided into stage regions using hard and soft segmentation. After that, the best color constancy method is selected for each geometry depth. To this end, we propose a method to combine color constancy algorithms by investigating the relation between depth, local image statistics, and color constancy. Image statistics are then exploited per depth to select the proper color constancy method. Our approach opens the possibility of estimating multiple illuminations by distinguishing nearby light sources from distant illuminations. Experiments on state-of-the-art data sets show that the proposed algorithm outperforms state-of-the-art single color constancy algorithms, with an improvement of almost 50% in median angular error. When using a perfect classifier (i.e., all of the test images are correctly classified into stages), the proposed method achieves an improvement of 52% in median angular error compared to the best-performing single color constancy algorithm.
|
|
|
Noha Elfiky, Jordi Gonzalez, & Xavier Roca. (2012). Compact and Adaptive Spatial Pyramids for Scene Recognition. IMAVIS - Image and Vision Computing, 30(8), 492–500.
Abstract: Most successful approaches to scene recognition tend to efficiently combine global image features with spatial local appearance and shape cues. On the other hand, less attention has been devoted to studying spatial texture features within scenes. Our method is based on the insight that scenes can be seen as a composition of micro-texture patterns. This paper analyzes the role of texture, along with its spatial layout, for scene recognition. However, one main drawback of the resulting spatial representation is its huge dimensionality. Hence, we propose a technique that addresses this problem by presenting a compact Spatial Pyramid (SP) representation. The basis of our compact representation, namely the Compact Adaptive Spatial Pyramid (CASP), consists of a two-stage compression strategy. This strategy is based on the Agglomerative Information Bottleneck (AIB) theory for (i) compressing the least informative SP features, and (ii) automatically learning the most appropriate shape for each category. Our method exceeds the state-of-the-art results on several challenging scene recognition data sets.
|
|