Records |
Author |
Rafael E. Rivadeneira; Patricia Suarez; Angel Sappa; Boris X. Vintimilla |
Title |
Thermal Image SuperResolution Through Deep Convolutional Neural Network |
Type |
Conference Article |
Year |
2019 |
Publication |
16th International Conference on Image Analysis and Recognition |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
417-426 |
Keywords |
|
Abstract |
Due to the lack of thermal image datasets, a new dataset has been acquired in order to propose a super-resolution approach based on a Deep Convolutional Neural Network architecture. Different experiments have been carried out: first, the proposed architecture was trained using only visible-spectrum images, and later it was trained with thermal-spectrum images. The results show that the network trained with thermal images achieves better image enhancement, maintaining image details and perspective. The thermal dataset is available at http://www.cidis.espol.edu.ec/es/dataset. |
Address |
Waterloo; Canada; August 2019 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
ICIAR |
Notes |
MSIAU; 600.130; 601.349; 600.122 |
Approved |
no |
Call Number |
Admin @ si @ RSS2019 |
Serial |
3269 |
Permanent link to this record |
|
|
|
Author |
X. Binefa; Jordi Vitria; Juan J. Villanueva |
Title |
Three dimensional inspection of integrated circuits: a depth from focus approach. |
Type |
Miscellaneous |
Year |
1992 |
Publication |
SPIE/IS&T Symposium on Electronic Imaging (Conference on Machine Vision in Microelectronics Manufacturing) |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
|
Keywords |
|
Abstract |
|
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
OR;MV |
Approved |
no |
Call Number |
BCNPCL @ bcnpcl @ BVV1992a |
Serial |
250 |
Permanent link to this record |
|
|
|
Author |
Mohammad Ali Bagheri; Qigang Gao; Sergio Escalera |
Title |
Three-Dimensional Design of Error Correcting Output Codes |
Type |
Conference Article |
Year |
2012 |
Publication |
8th International Conference on Machine Learning and Data Mining |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
29- |
Keywords |
|
Abstract |
|
Address |
Berlin, Germany |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
MLDM |
Notes |
HuPBA;MILAB |
Approved |
no |
Call Number |
Admin @ si @ BGE2012a |
Serial |
2041 |
Permanent link to this record |
|
|
|
Author |
Fadi Dornaika; Bogdan Raducanu |
Title |
Three-Dimensional Face Pose Detection and Tracking Using Monocular Videos: Tool and Application |
Type |
Journal Article |
Year |
2009 |
Publication |
IEEE Transactions on Systems, Man and Cybernetics, Part B |
Abbreviated Journal |
TSMCB |
Volume |
39 |
Issue |
4 |
Pages |
935–944 |
Keywords |
|
Abstract |
Recently, we have proposed a real-time tracker that simultaneously tracks the 3-D head pose and facial actions in monocular video sequences that can be provided by low-quality cameras. This paper has two main contributions. First, we propose an automatic 3-D face pose initialization scheme for the real-time tracker by adopting a 2-D face detector and an eigenface system. Second, we use the proposed methods (initialization and tracking) to enhance the human-machine interaction functionality of an AIBO robot. More precisely, we show how the orientation of the robot's camera (or any active vision system) can be controlled through the estimation of the user's head pose. Applications based on head-pose imitation, such as telepresence, virtual reality, and video games, can directly exploit the proposed techniques. Experiments on real videos confirm the robustness and usefulness of the proposed methods. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
OR;MV |
Approved |
no |
Call Number |
BCNPCL @ bcnpcl @ DoR2009a |
Serial |
1218 |
Permanent link to this record |
|
|
|
Author |
Cristina Cañero; Petia Radeva; Oriol Pujol; Ricardo Toledo; Debora Gil; J. Saludes; Juan J. Villanueva; B. Garcia del Blanco; J. Mauri; Eduard Fernandez-Nofrerias; J.A. Gomez-Hospital; E. Iraculis; J. Comin; C. Quiles; F. Jara; A. Cequier; E.Esplugas |
Title |
Three-dimensional reconstruction and quantification of the coronary tree using intravascular ultrasound images |
Type |
Conference Article |
Year |
1999 |
Publication |
Proceedings of the International Conference on Computers in Cardiology (CinC'99) |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
|
Keywords |
|
Abstract |
In this paper we propose a new computer vision technique to reconstruct the vascular wall in space using a deformable model-based technique and compounding methods, based on the fusion of biplane angiography and intravascular ultrasound (IVUS) data. A general-purpose three-dimensional guided interpolation method is also proposed. The three-dimensional centerline of the vessel is reconstructed from geometrically corrected biplane angiographies using automatic segmentation methods and snakes. The IVUS image planes are located in the three-dimensional space and correctly oriented. A guided interpolation method based on B-surfaces and snakes is used to fill the gaps among image planes. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
CINC99 |
Notes |
MILAB;RV;IAM;ADAS;HuPBA |
Approved |
no |
Call Number |
IAM @ iam @ CRP1999b |
Serial |
1492 |
Permanent link to this record |
|
|
|
Author |
Mireia Sole; Joan Blanco; Debora Gil; Oliver Valero; B. Cardenas; G. Fonseka; E. Anton; Alvaro Pascual; Richard Frodsham; Zaida Sarrate |
Title |
Time to match; when do homologous chromosomes become closer? |
Type |
Journal Article |
Year |
2022 |
Publication |
Chromosoma |
Abbreviated Journal |
CHRO |
Volume |
|
Issue |
|
Pages |
|
Keywords |
|
Abstract |
In most eukaryotes, pairing of homologous chromosomes is an essential feature of meiosis that ensures homologous recombination and segregation. However, when the pairing process begins is still under investigation. Contrasting data exist in Mus musculus, since both leptotene DSB-dependent and preleptotene DSB-independent mechanisms have been described. To resolve this contention, we examined homologous pairing in pre-meiotic and meiotic Mus musculus cells using a three-dimensional fluorescence in situ hybridization-based protocol, which enables the analysis of the entire karyotype using DNA painting probes. Our data establish unambiguously that 73.83% of homologous chromosomes are already paired at premeiotic stages (spermatogonia-early preleptotene spermatocytes). The percentage of paired homologous chromosomes increases to 84.60% at the mid-preleptotene-zygotene stage, reaching 100% at the pachytene stage. Importantly, our results demonstrate a high percentage of homologous pairing before the onset of meiosis; this pairing does not occur randomly, as the percentage was higher than that observed in somatic cells (19.47%) and between non-homologous chromosomes (41.1%). Finally, we have also observed that premeiotic homologous pairing is asynchronous and independent of chromosome size, GC content, or the presence of NOR regions. |
Address |
August, 2022 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
IAM; 601.139; 600.145; 600.096 |
Approved |
no |
Call Number |
Admin @ si @ SBG2022 |
Serial |
3719 |
Permanent link to this record |
|
|
|
Author |
Xavier Soria; Yachuan Li; Mohammad Rouhani; Angel Sappa |
Title |
Tiny and Efficient Model for the Edge Detection Generalization |
Type |
Conference Article |
Year |
2023 |
Publication |
Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
|
Keywords |
|
Abstract |
Most high-level computer vision tasks rely on low-level image operations as their initial processes. Operations such as edge detection, image enhancement, and super-resolution provide the foundations for higher-level image analysis. In this work we address edge detection with three main objectives: simplicity, efficiency, and generalization, since current state-of-the-art (SOTA) edge detection models have grown in complexity in pursuit of better accuracy. To achieve this, we present the Tiny and Efficient Edge Detector (TEED), a light convolutional neural network with only 58K parameters, less than 0.2% of the state-of-the-art models. Training on the BIPED dataset takes less than 30 minutes, with each epoch requiring less than 5 minutes. Our proposed model is easy to train and quickly converges within the first few epochs, while the predicted edge maps are crisp and of high quality. Additionally, we propose a new dataset to test the generalization of edge detection, which comprises samples from popular images used in edge detection and image segmentation. The source code is available at https://github.com/xavysp/TEED. |
Address |
Paris; France; October 2023 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
ICCVW |
Notes |
MSIAU |
Approved |
no |
Call Number |
Admin @ si @ SLR2023 |
Serial |
3941 |
Permanent link to this record |
|
|
|
Author |
T. Widemann; Xavier Otazu |
Title |
Titania's radius and an upper limit on its atmosphere from the September 8, 2001 stellar occultation |
Type |
Journal Article |
Year |
2009 |
Publication |
International Journal of Solar System Studies |
Abbreviated Journal |
|
Volume |
199 |
Issue |
2 |
Pages |
458–476 |
Keywords |
Occultations; Uranus, satellites; Satellites, shapes; Satellites, dynamics; Ices; Satellites, atmospheres |
Abstract |
On September 8, 2001 around 2 h UT, the largest Uranian moon, Titania, occulted the Hipparcos star 106829 (alias SAO 164538, a V=7.2, K0 III star). This was the first-ever observed occultation by this satellite, a rare event as Titania subtends only 0.11 arcsec on the sky. The star's unusual brightness allowed many observers, both amateur and professional, to monitor this unique event, providing fifty-seven occultation chords over three continents, all reported here. Selecting the best 27 occultation chords, and assuming a circular limb, we derive Titania's radius: View the MathML source (1-σ error bar). This implies a density of View the MathML source using the value View the MathML source derived by Taylor [Taylor, D.B., 1998. Astron. Astrophys. 330, 362–374]. We do not detect any significant difference between equatorial and polar radii, in the limit View the MathML source, in agreement with Voyager limb image retrieval during the 1986 flyby. Titania's offset with respect to the DE405 + URA027 (based on GUST86 theory) ephemeris is derived: ΔαTcos(δT)=−108±13 mas and ΔδT=−62±7 mas (ICRF J2000.0 system). Most of this offset is attributable to a barycentric offset of Uranus with respect to DE405, which we estimate to be: View the MathML source and ΔδU=−85±25 mas at the moment of occultation. This offset is confirmed by another Titania stellar occultation observed on August 1st, 2003, which provides an offset of ΔαTcos(δT)=−127±20 mas and ΔδT=−97±13 mas for the satellite. The combined ingress and egress data do not show any significant hint of atmospheric refraction, allowing us to set surface pressure limits at the level of 10–20 nbar. More specifically, we find an upper limit of 13 nbar (1-σ level) at 70 K and 17 nbar at 80 K for a putative isothermal CO2 atmosphere. We also provide an upper limit of 8 nbar for a possible CH4 atmosphere, and 22 nbar for pure N2, again at the 1-σ level.
We finally constrain the stellar size using the time-resolved star disappearance and reappearance at ingress and egress. We find an angular diameter of 0.54±0.03 mas (corresponding to View the MathML source projected at Titania). With a distance of 170±25 parsecs, this corresponds to a radius of 9.8±0.2 solar radii for HIP 106829, typical of a K0 III giant. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
ELSEVIER |
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
0019-1035 |
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
CIC |
Approved |
no |
Call Number |
CAT @ cat @ Wid2009 |
Serial |
1052 |
Permanent link to this record |
|
|
|
Author |
Armin Mehri; Parichehr Behjati; Angel Sappa |
Title |
TnTViT-G: Transformer in Transformer Network for Guidance Super Resolution |
Type |
Journal Article |
Year |
2023 |
Publication |
IEEE Access |
Abbreviated Journal |
ACCESS |
Volume |
11 |
Issue |
|
Pages |
11529-11540 |
Keywords |
|
Abstract |
Image super resolution is a promising approach that can improve the image quality of low-resolution optical sensors, leading to improved performance in various industrial applications. It is important to emphasize that most state-of-the-art super resolution algorithms use a single channel of input data for training and inference. However, this practice ignores the fact that the cost of acquiring high-resolution images can differ substantially from one spectral domain to another. In this paper, we exploit complementary information from a low-cost channel (visible image) to increase the image quality of an expensive channel (infrared image). We propose a dual-stream Transformer-based super resolution approach that uses the visible image as a guide to super-resolve another spectral band image. To this end, we introduce the Transformer in Transformer network for Guidance super resolution, named TnTViT-G, an efficient and effective method that extracts the features of input images via different streams and fuses them together at various stages. In addition, unlike other guidance super resolution approaches, TnTViT-G is not limited to a fixed upsample size and can generate super-resolved images of any size. Extensive experiments on various datasets show that the proposed model outperforms other state-of-the-art super resolution approaches. TnTViT-G surpasses state-of-the-art methods by up to 0.19∼2.3 dB, while being memory efficient. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
MSIAU |
Approved |
no |
Call Number |
Admin @ si @ MBS2023 |
Serial |
3876 |
Permanent link to this record |
|
|
|
Author |
Fahad Shahbaz Khan; Joost Van de Weijer; Maria Vanrell |
Title |
Top-Down Color Attention for Object Recognition |
Type |
Conference Article |
Year |
2009 |
Publication |
12th International Conference on Computer Vision |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
979 - 986 |
Keywords |
|
Abstract |
Generally, the bag-of-words based image representation follows a bottom-up paradigm: the subsequent stages of the process (feature detection, feature description, vocabulary construction, and image representation) are performed independently of the intended object classes to be detected. In such a framework, combining multiple cues such as shape and color often provides below-expected results. This paper presents a novel method for recognizing object categories when using multiple cues by separating the shape and color cues. Color is used to guide attention by means of a top-down, category-specific attention map. The color attention map is then further deployed to modulate the shape features by taking more features from regions within an image that are likely to contain an object instance. This procedure leads to a category-specific image histogram representation for each category. Furthermore, we argue that the method combines the advantages of both early and late fusion. We compare our approach with existing methods that combine color and shape cues on three data sets with varied importance of both cues, namely Soccer (color predominance), Flower (color and shape parity), and PASCAL VOC Challenge 2007 (shape predominance). The experiments clearly demonstrate that on all three data sets our proposed framework significantly outperforms the state-of-the-art methods for combining color and shape information. |
Address |
Kyoto, Japan |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
1550-5499 |
ISBN |
978-1-4244-4420-5 |
Medium |
|
Area |
|
Expedition |
|
Conference |
ICCV |
Notes |
CIC |
Approved |
no |
Call Number |
CAT @ cat @ SWV2009 |
Serial |
1196 |
Permanent link to this record |
|
|
|
Author |
Muhammad Anwer Rao; Fahad Shahbaz Khan; Joost Van de Weijer; Jorma Laaksonen |
Title |
Top-Down Deep Appearance Attention for Action Recognition |
Type |
Conference Article |
Year |
2017 |
Publication |
20th Scandinavian Conference on Image Analysis |
Abbreviated Journal |
|
Volume |
10269 |
Issue |
|
Pages |
297-309 |
Keywords |
Action recognition; CNNs; Feature fusion |
Abstract |
Recognizing human actions in videos is a challenging problem in computer vision. Recently, convolutional neural network based deep features have shown promising results for action recognition. In this paper, we investigate the problem of fusing deep appearance and motion cues for action recognition. We propose a video representation which combines deep appearance and motion based local convolutional features within the bag-of-deep-features framework. Firstly, dense deep appearance and motion based local convolutional features are extracted from spatial (RGB) and temporal (flow) networks, respectively. Both visual cues are processed in parallel by constructing separate visual vocabularies for appearance and motion. A category-specific appearance map is then learned to modulate the weights of the deep motion features. The proposed representation is discriminative and binds the deep local convolutional features to their spatial locations. Experiments are performed on two challenging datasets: the JHMDB dataset with 21 action classes and the ACT dataset with 43 categories. The results clearly demonstrate that our approach outperforms both standard approaches of early and late feature fusion. Further, our approach employs only action labels, without exploiting body part information, yet achieves competitive performance compared to the state-of-the-art deep features based approaches. |
Address |
Tromso; June 2017 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
SCIA |
Notes |
LAMP; 600.109; 600.068; 600.120 |
Approved |
no |
Call Number |
Admin @ si @ RKW2017b |
Serial |
3039 |
Permanent link to this record |
|
|
|
Author |
Meysam Madadi; Sergio Escalera; Alex Carruesco Llorens; Carlos Andujar; Xavier Baro; Jordi Gonzalez |
Title |
Top-down model fitting for hand pose recovery in sequences of depth images |
Type |
Journal Article |
Year |
2018 |
Publication |
Image and Vision Computing |
Abbreviated Journal |
IMAVIS |
Volume |
79 |
Issue |
|
Pages |
63-75 |
Keywords |
|
Abstract |
State-of-the-art approaches to hand pose estimation from depth images have reported promising results under quite controlled conditions. In this paper we propose a two-step pipeline for recovering the hand pose from a sequence of depth images. The pipeline has been designed to deal with images taken from any viewpoint and exhibiting a high degree of finger occlusion. In a first step we initialize the hand pose using a part-based model, fitting a set of hand components in the depth images. In a second step we consider temporal data and estimate the parameters of a trained bilinear model consisting of shape and trajectory bases. We evaluate our approach on a newly created synthetic hand dataset along with the NYU and MSRA real datasets. Results demonstrate that the proposed method outperforms the most recent pose recovery approaches, including those based on CNNs. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
HUPBA; 600.098 |
Approved |
no |
Call Number |
Admin @ si @ MEC2018 |
Serial |
3203 |
Permanent link to this record |
|
|
|
Author |
Estefania Talavera; Carolin Wuerich; Nicolai Petkov; Petia Radeva |
Title |
Topic modelling for routine discovery from egocentric photo-streams |
Type |
Journal Article |
Year |
2020 |
Publication |
Pattern Recognition |
Abbreviated Journal |
PR |
Volume |
104 |
Issue |
|
Pages |
107330 |
Keywords |
Routine; Egocentric vision; Lifestyle; Behaviour analysis; Topic modelling |
Abstract |
Developing tools to understand and visualize lifestyle is of high interest when addressing the improvement of habits and well-being of people. Routine, defined as the usual things that a person does daily, helps describe an individual's lifestyle. With this paper, we are the first to address the development of novel tools for the automatic discovery of routine days of an individual from his/her egocentric images. In the proposed model, sequences of images are first characterized by semantic labels detected by pre-trained CNNs. Then, these features are organized into temporal-semantic documents to later be embedded into a topic-model space. Finally, Dynamic-Time-Warping and Spectral-Clustering methods are used for the final day routine/non-routine discrimination. Moreover, we introduce the new EgoRoutine dataset, a collection of 104 egocentric days with more than 100,000 images recorded by 7 users. Results show that routine can be discovered and behavioural patterns can be observed. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
MILAB; no proj |
Approved |
no |
Call Number |
Admin @ si @ TWP2020 |
Serial |
3435 |
Permanent link to this record |
|
|
|
Author |
Eduard Vazquez; Ramon Baldrich; Javier Vazquez; Maria Vanrell |
Title |
Topological histogram reduction towards colour segmentation |
Type |
Book Chapter |
Year |
2007 |
Publication |
3rd Iberian Conference on Pattern Recognition and Image Analysis (IbPRIA 2007), J. Marti et al. (Eds.) LNCS 4477:55–62 |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
|
Keywords |
|
Abstract |
|
Address |
Gerona (Spain) |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
CIC |
Approved |
no |
Call Number |
CAT @ cat @ VBV2007 |
Serial |
809 |
Permanent link to this record |
|
|
|
Author |
A. Pujol; Jordi Vitria; Felipe Lumbreras; Juan J. Villanueva |
Title |
Topological principal component analysis for face encoding and recognition |
Type |
Journal Article |
Year |
2001 |
Publication |
Pattern Recognition Letters |
Abbreviated Journal |
PRL |
Volume |
22 |
Issue |
6-7 |
Pages |
769–776 |
Keywords |
|
Abstract |
IF: 0.552 |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
ADAS;OR;MV |
Approved |
no |
Call Number |
ADAS @ adas @ PVL2001 |
Serial |
155 |
Permanent link to this record |