Q. Xue, Laura Igual, A. Berenguel, M. Guerrieri, & L. Garrido. (2014). Active Contour Segmentation with Affine Coordinate-Based Parametrization. In 9th International Conference on Computer Vision Theory and Applications (Vol. 1, pp. 5–14).
Abstract: In this paper, we present a new framework for image segmentation based on parametrized active contours. The contour and the points of the image space are parametrized using a reduced set of control points that form a closed polygon in two-dimensional problems and a closed surface in three-dimensional problems. The active contour evolves as the control points move. We use mean value coordinates as the parametrization tool for the interface, which allows any point of the space, inside or outside the closed polygon or surface, to be parametrized. Region-based energies such as the one proposed by Chan and Vese can be easily implemented in both two- and three-dimensional segmentation problems. We show the usefulness of our approach with several experiments.
Keywords: Active Contours; Affine Coordinates; Mean Value Coordinates
|
Danna Xue, Fei Yang, Pei Wang, Luis Herranz, Jinqiu Sun, Yu Zhu, et al. (2022). SlimSeg: Slimmable Semantic Segmentation with Boundary Supervision. In 30th ACM International Conference on Multimedia (pp. 6539–6548). Association for Computing Machinery.
Abstract: Accurate semantic segmentation models typically require significant computational resources, inhibiting their use in practical applications. Recent works rely on well-crafted lightweight models to achieve fast inference. However, these models cannot flexibly adapt to varying accuracy and efficiency requirements. In this paper, we propose a simple but effective slimmable semantic segmentation (SlimSeg) method, which can be executed at different capacities during inference depending on the desired accuracy-efficiency tradeoff. More specifically, we employ parametrized channel slimming by stepwise downward knowledge distillation during training. Motivated by the observation that the differences between segmentation results of each submodel are mainly near the semantic borders, we introduce an additional boundary guided semantic segmentation loss to further improve the performance of each submodel. We show that our proposed SlimSeg with various mainstream networks can produce flexible models that provide dynamic adjustment of computational cost and better performance than independent models. Extensive experiments on semantic segmentation benchmarks, Cityscapes and CamVid, demonstrate the generalization ability of our framework.
|
Agnes Borras, & Josep Llados. (2009). Corest: A measure of color and space stability to detect salient regions according to human criteria. In 5th International Conference on Computer Vision Theory and Applications (pp. 204–209).
|
Partha Pratim Roy, Josep Llados, & Umapada Pal. (2009). A Complete System for Detection and Recognition of Text in Graphical Documents using Background Information. In 5th International Conference on Computer Vision Theory and Applications.
|
Arnau Ramisa, David Aldavert, Shrihari Vasudevan, Ricardo Toledo, & Ramon Lopez de Mantaras. (2011). The IIIA30 Mobile Robot Object Recognition Dataset. In 11th Portuguese Robotics Open.
Abstract: Object perception is a key feature in order to make mobile robots able to perform high-level tasks. However, research aimed at addressing the constraints and limitations encountered in a mobile robotics scenario, like low image resolution, motion blur or tight computational constraints, is still very scarce. In order to facilitate future research in this direction, in this work we present an object detection and recognition dataset acquired using a mobile robotic platform. As a baseline for the dataset, we evaluated the cascade of weak classifiers object detection method from Viola and Jones.
|
Juan A. Carvajal Ayala, Dennis Romero, & Angel Sappa. (2016). Fine-tuning based deep convolutional networks for lepidopterous genus recognition. In 21st Ibero American Congress on Pattern Recognition (pp. 467–475). LNCS.
Abstract: This paper describes an image classification approach oriented to identify specimens of lepidopterous insects at Ecuadorian ecological reserves. This work seeks to contribute to studies in the area of biology about genus of butterflies and also to facilitate the registration of unrecognized specimens. The proposed approach is based on the fine-tuning of three widely used pre-trained Convolutional Neural Networks (CNNs). This strategy is intended to overcome the reduced number of labeled images. Experimental results with a dataset labeled by expert biologists are presented, reaching a recognition accuracy above 92%.
|
Julio C. S. Jacques Junior, Cagri Ozcinar, Marina Marjanovic, Xavier Baro, Gholamreza Anbarjafari, & Sergio Escalera. (2019). On the effect of age perception biases for real age regression. In 14th IEEE International Conference on Automatic Face and Gesture Recognition.
Abstract: Automatic age estimation from facial images represents an important task in computer vision. This paper analyses the effect of gender, age, ethnic, makeup and expression attributes of faces as sources of bias to improve deep apparent age prediction. Following recent works where it is shown that apparent age labels benefit real age estimation, rather than direct real to real age regression, our main contribution is the integration, in an end-to-end architecture, of face attributes for apparent age prediction with an additional loss for real age regression. Experimental results on the APPA-REAL dataset indicate that the proposed network successfully takes advantage of the adopted attributes to improve both apparent and real age estimation. Our model outperformed a state-of-the-art architecture proposed to separately address apparent and real age regression. Finally, we present preliminary results and discussion of a proof of concept application using the proposed model to regress the apparent age of an individual based on the gender of an external observer.
|
Daniel Sanchez, Meysam Madadi, Marc Oliu, & Sergio Escalera. (2019). Multi-task human analysis in still images: 2D/3D pose, depth map, and multi-part segmentation. In 14th IEEE International Conference on Automatic Face and Gesture Recognition.
Abstract: While many individual tasks in the domain of human analysis have recently received an accuracy boost from deep learning approaches, multi-task learning has mostly been ignored due to a lack of data. New synthetic datasets are being released, filling this gap with synthetic generated data. In this work, we analyze four related human analysis tasks in still images in a multi-task scenario by leveraging such datasets. Specifically, we study the correlation of 2D/3D pose estimation, body part segmentation and full-body depth estimation. These tasks are learned via the well-known Stacked Hourglass module such that each of the task-specific streams shares information with the others. The main goal is to analyze how training these four related tasks together can benefit each individual task for a better generalization. Results on the newly released SURREAL dataset show that all four tasks benefit from the multi-task approach, but with different combinations of tasks: while combining all four tasks improves 2D pose estimation the most, 2D pose improves neither 3D pose nor full-body depth estimation. On the other hand, 2D part segmentation can benefit from 2D pose but not from 3D pose. In all cases, as expected, the maximum improvement is achieved on those human body parts that show more variability in terms of spatial distribution, appearance and shape, e.g. wrists and ankles.
|
Isabelle Guyon, Kristin Bennett, Gavin Cawley, Hugo Jair Escalante, Sergio Escalera, Tin Kam Ho, et al. (2015). AutoML Challenge 2015: Design and First Results. In 32nd International Conference on Machine Learning, ICML workshop, JMLR proceedings ICML15 (pp. 1–8).
Abstract: ChaLearn is organizing the Automatic Machine Learning (AutoML) contest 2015, which challenges participants to solve classification and regression problems without any human intervention. Participants' code is automatically run on the contest servers to train and test learning machines. However, there is no obligation to submit code; half of the prizes can be won by submitting prediction results only. Datasets of progressively increasing difficulty are introduced throughout the six rounds of the challenge. (Participants can enter the competition in any round.) The rounds alternate phases in which learners are tested on datasets participants have not seen (AutoML), and phases in which participants have limited time to tweak their algorithms on those datasets to improve performance (Tweakathon). This challenge will push the state of the art in fully automatic machine learning on a wide range of real-world problems. The platform will remain available beyond the termination of the challenge: http://codalab.org/AutoML.
Keywords: AutoML Challenge; machine learning; model selection; meta-learning; representation learning; active learning
|
C. Alejandro Parraga, Xavier Otazu, & Arash Akbarinia. (2019). Modelling symmetry perception with banks of quadrature convolutional Gabor kernels. In 42nd edition of the European Conference on Visual Perception (p. 224).
Abstract: Mirror symmetry is a property more likely to be encountered in animals than in medium-scale vegetation or inanimate objects in the natural world. This might be the reason why the human visual system has evolved to detect it quickly and robustly. Indeed, the perception of symmetry assists higher-level visual processes that are crucial for survival, such as target recognition and identification irrespective of position and location. Although the task of detecting symmetrical objects seems effortless to us, it is very challenging for computers (to the extent that it has been proposed as a robust “captcha” by Funk & Liu in 2016). Indeed, the exact mechanism of symmetry detection in primates is not well understood: fMRI studies have shown that symmetrical shapes activate specific higher-level areas of the visual cortex (Sasaki et al., 2005) and, similarly, a large body of psychophysical experiments suggests that symmetry perception is critically influenced by low-level mechanisms (Treder, 2010). In this work we attempt to find plausible low-level mechanisms that might form the basis for symmetry perception. Our simple model is made from banks of (i) odd-symmetric Gabors (resembling edge-detecting V1 neurons); and (ii) banks of larger odd- and even-symmetric Gabors (resembling higher visual cortex neurons) that pool signals from the 'edge image'. As reported previously (Akbarinia et al., ECVP 2017), the convolution of the symmetrical lines with the two Gabor kernels of alternate phase produces a minimum in one and a maximum in the other (Osorio, 1996), and the rectification and combination of these signals create lines which hint at mirror symmetry in natural images. We improved the algorithm by combining these signals across several spatial scales. Our preliminary results suggest that such multiscale combination of convolutional operations might form the basis for much of the operation of the HVS in terms of symmetry detection and representation.
|
Craig Von Land, Ricardo Toledo, & Juan J. Villanueva. (1996). Object Oriented Design of the DICOM standard.
|
Jordi Gonzalez, & Thomas B. Moeslund. (2008). Tracking Humans for the Evaluation of their Motion in Image Sequences.
|
Ognjen Rudovic, & Xavier Roca. (2008). Building Temporal Templates for Human Behaviour Classification. In First International Workshop on Tracking Humans for the Evaluation of their Motion in Image Sequences, BMVC 2008 (pp. 79–88).
|
Carles Fernandez, Pau Baiget, & Jordi Gonzalez. (2008). Cognitive-Guided Semantic Exploitation in Video Surveillance Interfaces. In First International Workshop on Tracking Humans for the Evaluation of their Motion in Image Sequences, BMVC 2008 (pp. 53–60).
|
Pau Baiget, Eric Sommerlade, I. Reid, & Jordi Gonzalez. (2008). Finding Prototypes to Estimate Trajectory Development in Outdoor Scenarios. In First International Workshop on Tracking Humans for the Evaluation of their Motion in Image Sequences, BMVC 2008 (pp. 27–34).
|