A. Auge, Javier Varona, & Juan J. Villanueva. (1997). Tumour Segmentation in Mammographies with Neural Networks. Application to Tumoural Volume Approximation. Proceedings of the VII NSPRIA, Vol. II, CVC–UAB.
A. Martinez, & Jordi Vitria. (1995). Designing and Implementing Real Walking Agents using Virtual Environments.
A. Martinez, & Jordi Vitria. (1995). A Development Platform for Autonomous Agents. ASI–AA–95 – Practice and Future of Autonomous Agents.
A. Martinez, & Jordi Vitria. (2000). Learning mixture models using a genetic version of the EM algorithm. PRL - Pattern Recognition Letters, 21(8), 759–769.
A. Pujol, Jordi Vitria, Felipe Lumbreras, & Juan J. Villanueva. (2001). Topological principal component analysis for face encoding and recognition. PRL - Pattern Recognition Letters, 22(6-7), 769–776.
A. Pujol, & Juan J. Villanueva. (1996). Development of an interface based on the use of neural networks for the classification of electroencephalographic responses to visual stimuli. XIV Congreso Anual de la Sociedad Española de Ingeniería Biomédica.
A. Sanfeliu, & Juan J. Villanueva. (2005). An approach of visual motion analysis. PRL - Pattern Recognition Letters, 26(3), 355–368.
A.F. Sole, Antonio Lopez, & G. Sapiro. (2001). Crease Enhancement Diffusion. Computer Vision and Image Understanding, 84(2), 241–248.
A.F. Sole, S. Ngan, G. Sapiro, X. Hu, & Antonio Lopez. (2001). Anisotropic 2-D and 3-D Averaging of fMRI Signals. IEEE Transactions on Medical Imaging, 20(2), 86–93.
A.S. Coquel, Jean-Pascal Jacob, M. Primet, A. Demarez, Mariella Dimiccoli, T. Julou, et al. (2013). Localization of protein aggregation in Escherichia coli is governed by diffusion and nucleoid macromolecular crowding effect. PCB - PLoS Computational Biology, 9(4).
Abstract: Aggregates of misfolded proteins are a hallmark of many age-related diseases. Recently, they have been linked to aging of Escherichia coli (E. coli) where protein aggregates accumulate at the old pole region of the aging bacterium. Because of the potential of E. coli as a model organism, elucidating aging and protein aggregation in this bacterium may pave the way to significant advances in our global understanding of aging. A first obstacle along this path is to decipher the mechanisms by which protein aggregates are targeted to specific intercellular locations. Here, using an integrated approach based on individual-based modeling, time-lapse fluorescence microscopy and automated image analysis, we show that the movement of aging-related protein aggregates in E. coli is purely diffusive (Brownian). Using single-particle tracking of protein aggregates in live E. coli cells, we estimated the average size and diffusion constant of the aggregates. Our results provide evidence that the aggregates passively diffuse within the cell, with diffusion constants that depend on their size in agreement with the Stokes-Einstein law. However, the aggregate displacements along the cell long axis are confined to a region that roughly corresponds to the nucleoid-free space in the cell pole, thus confirming the importance of increased macromolecular crowding in the nucleoids. We thus used 3D individual-based modeling to show that these three ingredients (diffusion, aggregation and diffusion hindrance in the nucleoids) are sufficient and necessary to reproduce the available experimental data on aggregate localization in the cells. Taken together, our results strongly support the hypothesis that the localization of aging-related protein aggregates in the poles of E. coli results from the coupling of passive diffusion-aggregation with spatially non-homogeneous macromolecular crowding. 
They further support the importance of “soft” intracellular structuring (based on macromolecular crowding) in diffusion-based protein localization in E. coli.
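The diffusion picture described in this abstract can be sketched with a toy one-dimensional model: a Brownian walker whose step size follows a Stokes-Einstein diffusion constant, reflected at the boundaries of a pole-sized region. The temperature, cytoplasmic viscosity, aggregate radius, and confinement length below are illustrative assumptions, not the paper's fitted parameters.

```python
import numpy as np

def stokes_einstein_D(radius_m, temp_K=310.0, eta_Pa_s=1e-2):
    """Diffusion constant D = k_B*T / (6*pi*eta*r), in m^2/s.
    The viscosity value is an assumed cytoplasm-like figure."""
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    return k_B * temp_K / (6 * np.pi * eta_Pa_s * radius_m)

def simulate_confined_walk(D, dt, n_steps, box=0.5e-6, seed=0):
    """Reflecting 1-D random walk on [-box, box] (a nucleoid-free pole region)."""
    rng = np.random.default_rng(seed)
    steps = rng.normal(0.0, np.sqrt(2 * D * dt), n_steps)  # Brownian increments
    x = np.empty(n_steps)
    pos = 0.0
    for i, s in enumerate(steps):
        pos += s
        if pos > box:        # reflect at the right boundary
            pos = 2 * box - pos
        elif pos < -box:     # reflect at the left boundary
            pos = -2 * box - pos
        x[i] = pos
    return x

D = stokes_einstein_D(radius_m=100e-9)        # ~100 nm aggregate (assumed size)
x = simulate_confined_walk(D, dt=1e-3, n_steps=5000)
msd_lag1 = np.mean(np.diff(x) ** 2)           # ~ 2*D*dt at short lags
```

At short lags the mean-squared displacement recovers the free-diffusion value, while the reflecting boundary makes it saturate at long lags, analogous to the confinement of aggregate displacements along the cell's long axis.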
Adriana Romero, Carlo Gatta, & Gustavo Camps-Valls. (2016). Unsupervised Deep Feature Extraction for Remote Sensing Image Classification. TGRS - IEEE Transactions on Geoscience and Remote Sensing, 54(3), 1349–1362.
Abstract: This paper introduces the use of single-layer and deep convolutional networks for remote sensing data analysis. Direct application to multi- and hyperspectral imagery of supervised (shallow or deep) convolutional networks is very challenging given the high input data dimensionality and the relatively small amount of available labeled data. Therefore, we propose the use of greedy layerwise unsupervised pretraining coupled with a highly efficient algorithm for unsupervised learning of sparse features. The algorithm is rooted on sparse representations and enforces both population and lifetime sparsity of the extracted features, simultaneously. We successfully illustrate the expressive power of the extracted representations in several scenarios: classification of aerial scenes, as well as land-use classification in very high resolution or land-cover classification from multi- and hyperspectral images. The proposed algorithm clearly outperforms standard principal component analysis (PCA) and its kernel counterpart (kPCA), as well as current state-of-the-art algorithms of aerial classification, while being extremely computationally efficient at learning representations of data. Results show that single-layer convolutional networks can extract powerful discriminative features only when the receptive field accounts for neighboring pixels and are preferred when the classification requires high resolution and detailed results. However, deep architectures significantly outperform single-layer variants, capturing increasing levels of abstraction and complexity throughout the feature hierarchy.
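The population/lifetime sparsity constraint mentioned in this abstract can be illustrated with a toy thresholding step (this is not the authors' actual learning algorithm; the function names and top-k rule are illustrative): keep, for each sample, only its k largest activations (population sparsity), and for each feature, only the samples on which it fires most strongly (lifetime sparsity).

```python
import numpy as np

def topk_mask(a, k, axis):
    """Boolean mask keeping the k largest entries of `a` along `axis`."""
    idx = np.argpartition(-a, k - 1, axis=axis)          # first k slots hold the k largest
    mask = np.zeros_like(a, dtype=bool)
    np.put_along_axis(mask, np.take(idx, range(k), axis=axis), True, axis=axis)
    return mask

def sparsify(A, k_pop=2, k_life=2):
    """Zero out activations that are not winners in BOTH senses."""
    pop = topk_mask(A, k_pop, axis=1)    # population sparsity: per-sample winners
    life = topk_mask(A, k_life, axis=0)  # lifetime sparsity: per-feature winners
    return A * (pop & life)

rng = np.random.default_rng(0)
A = rng.random((6, 8))   # 6 samples x 8 feature activations (toy data)
S = sparsify(A)
```

Each row of `S` then has at most `k_pop` nonzeros and each column at most `k_life`, which is the joint constraint the abstract describes, enforced here by a crude intersection rather than by the paper's optimization.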
Adriana Romero, Petia Radeva, & Carlo Gatta. (2015). Meta-parameter free unsupervised sparse feature learning. TPAMI - IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(8), 1716–1722.
Abstract: We propose a meta-parameter free, off-the-shelf, simple and fast unsupervised feature learning algorithm, which exploits a new way of optimizing for sparsity. Experiments on CIFAR-10, STL-10 and UCMerced show that the method achieves state-of-the-art performance, providing discriminative features that generalize well.
Adrien Gaidon, Antonio Lopez, & Florent Perronnin. (2018). The Reasonable Effectiveness of Synthetic Visual Data. IJCV - International Journal of Computer Vision, 126(9), 899–901.
Adrien Pavao, Isabelle Guyon, Anne-Catherine Letournel, Dinh-Tuan Tran, Xavier Baro, Hugo Jair Escalante, et al. (2023). CodaLab Competitions: An Open Source Platform to Organize Scientific Challenges. JMLR - Journal of Machine Learning Research.
Abstract: CodaLab Competitions is an open source web platform designed to help data scientists and research teams to crowd-source the resolution of machine learning problems through the organization of competitions, also called challenges or contests. CodaLab Competitions provides useful features such as multiple phases, results and code submissions, multi-score leaderboards, and jobs running inside Docker containers. The platform is very flexible and can handle large scale experiments, by allowing organizers to upload large datasets and provide their own CPU or GPU compute workers.
Aitor Alvarez-Gila, Adrian Galdran, Estibaliz Garrote, & Joost Van de Weijer. (2019). Self-supervised blur detection from synthetically blurred scenes. IMAVIS - Image and Vision Computing, 92, 103804.
Abstract: Blur detection aims at segmenting the blurred areas of a given image. Recent deep learning-based methods approach this problem by learning an end-to-end mapping between the blurred input and a binary mask representing the localization of its blurred areas. Nevertheless, the effectiveness of such deep models is limited due to the scarcity of datasets annotated in terms of blur segmentation, as blur annotation is labor intensive. In this work, we bypass the need for such annotated datasets for end-to-end learning, and instead rely on object proposals and a model for blur generation in order to produce a dataset of synthetically blurred images. This allows us to perform self-supervised learning over the generated image and ground truth blur mask pairs using CNNs, defining a framework that can be employed in purely self-supervised, weakly supervised or semi-supervised configurations. Interestingly, experimental results of such setups over the largest blur segmentation datasets available show that this approach achieves state of the art results in blur segmentation, even without ever observing any real blurred image.
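The data-generation idea in this abstract (blur part of a sharp image and use the blurred region as a free ground-truth mask) can be sketched in a few lines. The rectangular "proposal" region and the separable box blur below are simplifying assumptions; the paper uses object proposals and a richer blur model.

```python
import numpy as np

def box_blur(img, k=5):
    """Separable box blur with edge padding, for a single-channel float image."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    kernel = np.ones(k) / k
    # horizontal then vertical running means
    h = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, p)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, h)

def synth_blur_pair(img, y0, y1, x0, x1, k=5):
    """Return (partially blurred image, binary ground-truth blur mask)."""
    blurred_full = box_blur(img, k)
    out = img.copy()
    out[y0:y1, x0:x1] = blurred_full[y0:y1, x0:x1]  # paste blur into the region
    mask = np.zeros(img.shape, dtype=np.uint8)
    mask[y0:y1, x0:x1] = 1                          # free supervision signal
    return out, mask

rng = np.random.default_rng(0)
img = rng.random((32, 32))                          # stand-in for a sharp photo
blurred, mask = synth_blur_pair(img, 8, 24, 8, 24)
```

Each `(blurred, mask)` pair can then train a segmentation CNN without any manual blur annotation, which is the self-supervision the abstract describes.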