J. Mauri and 14 others. 2000. Moviment del vas en l'anàlisi d'imatges d'ecografia intracoronària: un model matemàtic [Vessel motion in intracoronary ultrasound image analysis: a mathematical model]. Congrés de la Societat Catalana de Cardiologia.
J. Mauri and 14 others. 2000. Avaluació del Conjunt Stent/Artèria mitjançant ecografia intracoronària: l'entorn informàtic [Evaluation of the stent/artery ensemble by intracoronary ultrasound: the computing environment]. Congrés de la Societat Catalana de Cardiologia.
Petia Radeva, Joan Serrat and Enric Marti. 1995. A snake for model-based segmentation. Proceedings of the Fifth International Conference on Computer Vision, 816–821.
Abstract: Despite the promising results of numerous applications, the hitherto proposed snake techniques share some common problems: snake attraction by spurious edge points, snake degeneration (shrinking and flattening), convergence and stability of the deformation process, snake initialization, and local determination of the parameters of elasticity. We argue here that these problems can be solved only when all the snake aspects are considered. The snakes proposed here implement a new potential field and external force in order to provide deformation convergence, attraction by both near and far edges, and snake behaviour that is selective according to edge orientation. Furthermore, we conclude that in the case of model-based segmentation, the internal force should include structural information about the expected snake shape. Experiments using this kind of snake to segment bones in complex hand radiographs show a significant improvement.
Keywords: snakes; elastic matching; model-based segmentation
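
For context, the classic snake evolution this paper extends can be sketched in a few lines: an external force derived from an edge potential field pulls a closed contour toward image boundaries, while a banded internal-energy system keeps it smooth. The sketch below is a minimal semi-implicit implementation of that baseline, not of the model-based internal force or the new potential field the paper proposes; all parameter values are illustrative assumptions.

```python
# Minimal semi-implicit snake (active contour) sketch, assuming the
# classic Kass-Witkin-Terzopoulos formulation. The paper's model-based
# internal force and new potential field are NOT reproduced here.
import numpy as np
from scipy import ndimage

def snake_system_inverse(n, alpha=0.1, beta=0.01, gamma=1.0):
    """Return (A + gamma*I)^-1 for the semi-implicit snake update."""
    # Circulant second-derivative (elasticity) stencil for a closed contour.
    d2 = np.roll(np.eye(n), -1, 0) - 2 * np.eye(n) + np.roll(np.eye(n), 1, 0)
    A = -alpha * d2 + beta * (d2 @ d2)       # elasticity + bending terms
    return np.linalg.inv(A + gamma * np.eye(n))

def evolve_snake(image, pts, iters=200, alpha=0.1, beta=0.01, gamma=1.0):
    """Deform a closed contour pts (n, 2) toward strong image edges."""
    # Edge potential: negative gradient magnitude, smoothed to widen its basin.
    smooth = ndimage.gaussian_filter(image.astype(float), 3.0)
    gy, gx = np.gradient(smooth)
    potential = -np.hypot(gx, gy)
    py, px = np.gradient(potential)          # external force = -grad(potential)
    inv = snake_system_inverse(len(pts), alpha, beta, gamma)
    for _ in range(iters):
        r = np.clip(pts[:, 0].astype(int), 0, image.shape[0] - 1)
        c = np.clip(pts[:, 1].astype(int), 0, image.shape[1] - 1)
        force = np.stack([-py[r, c], -px[r, c]], axis=1)
        pts = inv @ (gamma * pts + force)    # one semi-implicit Euler step
    return pts
```

A typical call initializes pts as a circle around the structure of interest and runs evolve_snake on the grayscale image; the paper's contribution lies precisely in replacing the potential field and internal force used here.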
Oriol Rodriguez-Leor and 10 others. 2002. Ecografia Intracoronària: Segmentació Automàtica de l'Àrea de la Llum [Intracoronary ultrasound: automatic segmentation of the lumen area]. XXXVIII Congreso Nacional de la Sociedad Española de Cardiología.
Joan Serrat and Enric Marti. 1991. Elastic matching using interpolation splines. IV Spanish Symposium on Pattern Recognition and Image Analysis.
Ernest Valveny, Ricardo Toledo, Ramon Baldrich and Enric Marti. 2002. Combining recognition-based and segmentation-based approaches for graphic symbol recognition using deformable template matching. Proceedings of the Second IASTED International Conference on Visualization, Imaging and Image Processing (VIIP 2002), 502–507.
Mohamed Ramzy Ibrahim, Robert Benavente, Daniel Ponsa and Felipe Lumbreras. 2024. SWViT-RRDB: Shifted Window Vision Transformer Integrating Residual in Residual Dense Block for Remote Sensing Super-Resolution. 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications.
Abstract: Remote sensing applications, impacted by acquisition season and sensor variety, require high-resolution images. Transformer-based models improve satellite image super-resolution but are less effective than convolutional neural networks (CNNs) at extracting local details, crucial for image clarity. This paper introduces SWViT-RRDB, a new deep learning model for satellite imagery super-resolution. The SWViT-RRDB, combining transformer with convolution and attention blocks, overcomes the limitations of existing models by better representing small objects in satellite images. In this model, a pipeline of residual fusion group (RFG) blocks is used to combine the multi-headed self-attention (MSA) with residual in residual dense block (RRDB). This combines global and local image data for better super-resolution. Additionally, an overlapping cross-attention block (OCAB) is used to enhance fusion and allow interaction between neighboring pixels to maintain long-range pixel dependencies across the image. The SWViT-RRDB model and its larger variants outperform state-of-the-art (SoTA) models on two different satellite datasets in terms of PSNR and SSIM.
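
The pairing the abstract describes, window-based multi-head self-attention for global context alongside a residual-in-residual dense convolution block for local detail, can be sketched as below. Module sizes, names, and the 0.2 residual scale are illustrative PyTorch assumptions, not the authors' implementation.

```python
# Sketch of the global/local fusion idea behind SWViT-RRDB: window
# self-attention (global context) next to a residual dense block
# (local detail). Sizes and names are illustrative assumptions.
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    """Densely connected 3x3 convolutions with a scaled residual."""
    def __init__(self, ch=64, growth=32):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(ch + i * growth, growth if i < 3 else ch, 3, padding=1)
            for i in range(4)
        )
        self.act = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x):
        feats = [x]
        for conv in self.convs[:-1]:
            feats.append(self.act(conv(torch.cat(feats, dim=1))))
        out = self.convs[-1](torch.cat(feats, dim=1))
        return x + 0.2 * out                 # scaled residual, as in RRDB

class WindowAttentionBlock(nn.Module):
    """Multi-head self-attention over non-overlapping spatial windows."""
    def __init__(self, ch=64, window=8, heads=4):
        super().__init__()
        self.window = window
        self.norm = nn.LayerNorm(ch)
        self.attn = nn.MultiheadAttention(ch, heads, batch_first=True)

    def forward(self, x):                    # x: (B, C, H, W); H, W % window == 0
        b, c, h, w = x.shape
        s = self.window
        # Partition the feature map into (B * windows, s*s, C) token sets.
        t = x.view(b, c, h // s, s, w // s, s).permute(0, 2, 4, 3, 5, 1)
        t = self.norm(t.reshape(-1, s * s, c))
        a, _ = self.attn(t, t, t)
        a = a.reshape(b, h // s, w // s, s, s, c).permute(0, 5, 1, 3, 2, 4)
        return x + a.reshape(b, c, h, w)     # residual fusion of attention

# Usage: alternate global (attention) and local (dense conv) blocks.
x = torch.randn(1, 64, 32, 32)
y = ResidualDenseBlock()(WindowAttentionBlock()(x))
print(y.shape)                               # torch.Size([1, 64, 32, 32])
```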
Ricardo Toledo, S. Sallent, J. Paradell and Juan J. Villanueva. 1995. CARE: Computer Assisted Radiology Environment. Pattern Recognition and Image Analysis: Preprints of the VI National Symposium on Pattern Recognition & Image Analysis.
Mohammad Rouhani, E. Boyer and Angel Sappa. 2014. Non-Rigid Registration meets Surface Reconstruction. International Conference on 3D Vision, 617–624.
Abstract: Non-rigid registration is an important task in computer vision with many applications in shape and motion modeling. A fundamental step of the registration is the data association between the source and the target sets. Such association proves difficult in practice, due to the discrete nature of the information and its corruption by various types of noise, e.g. outliers and missing data. In this paper we investigate the benefit of implicit representations for the non-rigid registration of 3D point clouds. First, the target points are described with small quadratic patches that are blended through partition-of-unity weighting. Then, the discrete association between the source and the target can be replaced by a continuous distance field induced by the interface. By combining this distance field with a proper deformation term, the registration energy can be expressed in a linear least-squares form that is easy and fast to solve. This significantly eases the registration by avoiding direct association between points. Moreover, a hierarchical approach can be easily implemented by employing coarse-to-fine representations. Experimental results are provided for point clouds from multi-view data sets. The qualitative and quantitative comparisons show the superior performance and robustness of our framework.
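
The central trick, replacing discrete point association with a continuous distance field so that each registration step reduces to a linear least-squares solve, can be illustrated in 2D. The sketch below uses a plain distance transform and an affine deformation; the paper blends quadratic patches through partition-of-unity weighting and uses a richer deformation term, so everything here is a simplifying assumption.

```python
# 2D sketch: register source points to a target by linearizing a
# distance field, so each Gauss-Newton step is one linear least
# squares. The distance transform and affine model stand in for the
# paper's blended quadratic patches and deformation term.
import numpy as np
from scipy import ndimage

def distance_field(target, shape):
    """Unsigned distance field to the target points on a pixel grid."""
    grid = np.ones(shape, bool)
    idx = np.round(target).astype(int)
    grid[idx[:, 0].clip(0, shape[0] - 1), idx[:, 1].clip(0, shape[1] - 1)] = False
    d = ndimage.distance_transform_edt(grid)
    gy, gx = np.gradient(d)
    return d, gy, gx

def register_affine(source, target, shape=(256, 256), iters=20):
    """Gauss-Newton: linearize d(A p + t) around the current pose."""
    d, gy, gx = distance_field(target, shape)
    pts = source.astype(float).copy()
    for _ in range(iters):
        r = pts[:, 0].clip(0, shape[0] - 1).astype(int)
        c = pts[:, 1].clip(0, shape[1] - 1).astype(int)
        g = np.stack([gy[r, c], gx[r, c]], axis=1)   # field gradient at pts
        res = d[r, c]                                # current distances
        # d(p + dp) ~ res + g . dp, with dp affine in q = (a11, a12, t1, a21, a22, t2).
        J = np.zeros((len(pts), 6))
        J[:, 0] = g[:, 0] * pts[:, 0]
        J[:, 1] = g[:, 0] * pts[:, 1]
        J[:, 2] = g[:, 0]
        J[:, 3] = g[:, 1] * pts[:, 0]
        J[:, 4] = g[:, 1] * pts[:, 1]
        J[:, 5] = g[:, 1]
        q, *_ = np.linalg.lstsq(J, -res, rcond=None) # one linear LSQ solve
        A = np.eye(2) + q[[0, 1, 3, 4]].reshape(2, 2)
        pts = pts @ A.T + q[[2, 5]]                  # apply incremental affine
    return pts
```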
Jose Carlos Rubio, Joan Serrat and Antonio Lopez. 2012. Video Co-segmentation. 11th Asian Conference on Computer Vision (LNCS). Springer Berlin Heidelberg, 13–24.
Abstract: Segmentation of a single image is in general a highly underconstrained problem. A frequent approach to solve it is to somehow provide prior knowledge or constraints on what the objects of interest look like (in terms of their shape, size, color, location or structure). Image co-segmentation trades the need for such knowledge for something much easier to obtain, namely, additional images showing the object from other viewpoints. The segmentation problem is then posed as one of differentiating the similar object regions in all the images from the more varying background. In this paper, for the first time, we extend this approach to video segmentation: given two or more video sequences showing the same object (or objects belonging to the same class) moving in a similar manner, we aim to outline its region in all the frames. In addition, the method works in an unsupervised manner, by learning to segment at test time. We compare favorably with two state-of-the-art methods on video segmentation and report results on benchmark videos.
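
As a toy illustration of the shared-appearance idea, one can score each color by how evenly its mass splits between the two videos: colors common to both are taken as the co-occurring object, colors dominating only one video as background. The paper's method is far richer (it exploits motion and learns to segment at test time); the histogram scheme, bin count, and threshold below are all hypothetical.

```python
# Toy co-segmentation sketch: colors with similar frequency in both
# videos are scored as the common object. Purely illustrative; the
# paper's unsupervised method also uses motion and test-time learning.
import numpy as np

def color_hist(frames, bins=8):
    """Normalized joint RGB histogram over all frames (uint8 input)."""
    q = np.concatenate([f.reshape(-1, 3) for f in frames]).astype(int) * bins // 256
    h = np.zeros((bins,) * 3)
    np.add.at(h, tuple(q.T), 1)
    return h / h.sum()

def cosegment(frames_a, frames_b, bins=8, thresh=0.5):
    """Per-pixel foreground masks from cross-video color agreement."""
    ha, hb = color_hist(frames_a, bins), color_hist(frames_b, bins)
    # Score each color bin by how balanced its mass is across the videos.
    score = 2 * np.minimum(ha, hb) / (ha + hb + 1e-12)
    def masks(frames):
        out = []
        for f in frames:
            q = f.astype(int) * bins // 256
            out.append(score[q[..., 0], q[..., 1], q[..., 2]] > thresh)
        return out
    return masks(frames_a), masks(frames_b)
```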