Records | |||||
---|---|---|---|---|---|
Author | Santiago Segui; Oriol Pujol; Jordi Vitria | ||||
Title | Learning to count with deep object features | Type | Conference Article | ||
Year | 2015 | Publication | Deep Vision: Deep Learning in Computer Vision, CVPR 2015 Workshop | Abbreviated Journal | |
Volume | Issue | Pages | 90-96 | ||
Keywords | |||||
Abstract | Learning to count is a learning strategy that has been recently proposed in the literature for dealing with problems where estimating the number of object instances in a scene is the final objective. In this framework, the task of learning to detect and localize individual object instances is seen as a harder task that can be evaded by casting the problem as that of computing a regression value from hand-crafted image features. In this paper we explore the features that are learned when training a counting convolutional neural network in order to understand their underlying representation. To this end we define a counting problem for MNIST data and show that the internal representation of the network is able to classify digits in spite of the fact that no direct supervision was provided for them during training. We also present preliminary results about a deep network that is able to count the number of pedestrians in a scene. |
Address | Boston; USA; June 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPRW | ||
Notes | MILAB; HuPBA; OR; MV | Approved | no | |
Call Number | Admin @ si @ SPV2015 | Serial | 2636 | |
Permanent link to this record | |||||
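The counting-by-regression idea in the record above (estimating an object count directly from image features instead of detecting every instance) can be illustrated with a toy experiment. The blob images and the pixel-mass feature below are invented stand-ins for illustration, not the paper's MNIST setup or its learned deep features.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_image(n_objects, size=32):
    """Render n_objects 3x3 'objects' at random positions (they may overlap)."""
    img = np.zeros((size, size))
    for _ in range(n_objects):
        r, c = rng.integers(0, size - 3, size=2)
        img[r:r + 3, c:c + 3] = 1.0
    return img

def feature(img):
    # Hand-crafted stand-in for learned features: total foreground mass + bias.
    return np.array([img.sum(), 1.0])

# Training set labelled only with counts -- no per-object supervision.
train_ns = rng.integers(0, 10, size=200)
X = np.array([feature(make_image(int(n))) for n in train_ns])
w, *_ = np.linalg.lstsq(X, train_ns.astype(float), rcond=None)

# Counting as regression: evaluate on fresh images.
test_ns = rng.integers(0, 10, size=20)
preds = np.array([feature(make_image(int(n))) @ w for n in test_ns])
mae = float(np.mean(np.abs(preds - test_ns)))
print(mae < 1.0)
```

The regression absorbs the average mass lost to overlapping objects, which is why no explicit detection step is needed.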
Author | Md.Mostafa Kamal Sarker; Hatem A. Rashwan; Farhan Akram; Syeda Furruka Banu; Adel Saleh; Vivek Kumar Singh; Forhad U. H. Chowdhury; Saddam Abdulwahab; Santiago Romani; Petia Radeva; Domenec Puig | |
Title | SLSDeep: Skin Lesion Segmentation Based on Dilated Residual and Pyramid Pooling Networks. | Type | Conference Article | ||
Year | 2018 | Publication | 21st International Conference on Medical Image Computing & Computer Assisted Intervention | Abbreviated Journal | |
Volume | 2 | Issue | Pages | 21-29 | |
Keywords | |||||
Abstract | Skin lesion segmentation (SLS) in dermoscopic images is a crucial task for automated diagnosis of melanoma. In this paper, we present a robust deep learning SLS model called SLSDeep, represented as an encoder-decoder network. The encoder is built from dilated residual layers, while the decoder uses a pyramid pooling network followed by three convolution layers. Unlike traditional methods that employ a cross-entropy loss, we investigate a loss function combining Negative Log Likelihood (NLL) and End Point Error (EPE) to accurately segment melanoma regions with sharp boundaries. The robustness of the proposed model was evaluated on two public databases, ISBI 2016 and 2017, for the skin lesion analysis towards melanoma detection challenge. The proposed model outperforms the state-of-the-art methods in terms of segmentation accuracy. Moreover, it can segment more than 100 images of size 384x384 per second on a recent GPU. |
Address | Granada; Spain; September 2018 | |
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | MICCAI | ||
Notes | MILAB; no proj | Approved | no | ||
Call Number | Admin @ si @ SRA2018 | Serial | 3112 | |
Permanent link to this record | |||||
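The NLL + EPE loss combination described in the record above can be read, schematically, as a pixelwise negative log-likelihood term plus an end-point-error term on the mask gradient field that penalises boundary mismatch. The sketch below is one plausible interpretation for illustration; the paper's exact formulation may differ.

```python
import numpy as np

def combined_loss(pred, target, eps=1e-7):
    """Pixelwise NLL plus an end-point-error (EPE) term on mask gradients.

    The EPE-on-gradients reading is an assumption made for this sketch:
    it emphasises sharp, well-placed lesion boundaries.
    """
    p = np.clip(pred, eps, 1 - eps)
    nll = -np.mean(target * np.log(p) + (1 - target) * np.log(1 - p))
    gy_p, gx_p = np.gradient(pred)
    gy_t, gx_t = np.gradient(target)
    epe = np.mean(np.sqrt((gx_p - gx_t) ** 2 + (gy_p - gy_t) ** 2))
    return nll + epe

# Toy 16x16 ground-truth mask with a square lesion.
target = np.zeros((16, 16))
target[4:12, 4:12] = 1.0

good = combined_loss(np.where(target == 1, 0.95, 0.05), target)
bad = combined_loss(np.full((16, 16), 0.5), target)
print(good < bad)  # → True
```

A confident, well-aligned prediction scores lower on both terms than a maximally uncertain one, which has no boundary at all.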
Author | Md.Mostafa Kamal Sarker; Hatem A. Rashwan; Farhan Akram; Estefania Talavera; Syeda Furruka Banu; Petia Radeva; Domenec Puig | ||||
Title | Recognizing Food Places in Egocentric Photo-Streams Using Multi-Scale Atrous Convolutional Networks and Self-Attention Mechanism | Type | Journal Article | ||
Year | 2019 | Publication | IEEE Access | Abbreviated Journal | ACCESS |
Volume | 7 | Issue | Pages | 39069-39082 | |
Keywords | |||||
Abstract | Wearable sensors (e.g., lifelogging cameras) represent very useful tools to monitor people's daily habits and lifestyle. Wearable cameras are able to continuously capture different moments of the day of their wearers, their environment, and interactions with objects, people, and places reflecting their personal lifestyle. The food places where people eat, drink, and buy food, such as restaurants, bars, and supermarkets, can directly affect their daily dietary intake and behavior. Consequently, developing an automated monitoring system based on analyzing a person's food habits from daily recorded egocentric photo-streams of the food places can provide valuable means for people to improve their eating habits. This can be done by generating a detailed report of the time spent in specific food places by classifying the captured food place images to different groups. In this paper, we propose a self-attention mechanism with multi-scale atrous convolutional networks to generate discriminative features from image streams to recognize a predetermined set of food place categories. We apply our model on an egocentric food place dataset called “EgoFoodPlaces” that comprises 43,392 images captured by 16 individuals using a lifelogging camera. The proposed model achieved an overall classification accuracy of 80% on the “EgoFoodPlaces” dataset, outperforming baseline methods such as VGG16, ResNet50, and InceptionV3. |
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB; no menciona | Approved | no | ||
Call Number | Admin @ si @ SRA2019 | Serial | 3296 | |
Permanent link to this record | |||||
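The multi-scale atrous convolutions in the record above enlarge the receptive field without adding parameters by sampling the input with "holes". A minimal 1-D numpy sketch of the mechanism (the paper's networks are 2-D and learned; this only shows what dilation does):

```python
import numpy as np

def atrous_conv1d(x, kernel, rate):
    """1-D atrous (dilated) convolution: equivalent to convolving with the
    kernel after inserting (rate - 1) zeros between its taps."""
    k = len(kernel)
    span = (k - 1) * rate + 1          # effective receptive field per output
    out = np.zeros(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * rate] for j in range(k))
    return out

x = np.arange(20, dtype=float)         # a linear ramp
kernel = np.array([1.0, -2.0, 1.0])    # second-difference kernel

# rate=1 is an ordinary convolution; rate=3 uses the same 3 taps spread
# over a span of 7 inputs -- a wider receptive field at the same cost.
dense = atrous_conv1d(x, kernel, rate=1)
dilated = atrous_conv1d(x, kernel, rate=3)
print(len(dense), len(dilated))  # → 18 14
```

Running several rates in parallel and concatenating the outputs is the "multi-scale" part: each rate responds to structure at a different spatial scale.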
Author | Md Mostafa Kamal Sarker; Hatem A. Rashwan; Farhan Akram; Vivek Kumar Singh; Syeda Furruka Banu; Forhad U H Chowdhury; Kabir Ahmed Choudhury; Sylvie Chambon; Petia Radeva; Domenec Puig; Mohamed Abdel-Nasser | ||||
Title | SLSNet: Skin lesion segmentation using a lightweight generative adversarial network | Type | Journal Article | ||
Year | 2021 | Publication | Expert Systems With Applications | Abbreviated Journal | ESWA |
Volume | 183 | Issue | Pages | 115433 | |
Keywords | |||||
Abstract | The determination of precise skin lesion boundaries in dermoscopic images using automated methods faces many challenges, most importantly, the presence of hair, inconspicuous lesion edges and low contrast in dermoscopic images, and variability in the color, texture and shapes of skin lesions. Existing deep learning-based skin lesion segmentation algorithms are expensive in terms of computational time and memory. Consequently, running such segmentation algorithms requires a powerful GPU and high bandwidth memory, which are not available in dermoscopy devices. Thus, this article aims to achieve precise skin lesion segmentation with minimal resources: a lightweight, efficient generative adversarial network (GAN) model called SLSNet, which combines 1-D kernel factorized networks, position and channel attention, and multiscale aggregation mechanisms with a GAN model. The 1-D kernel factorized network reduces the computational cost of 2-D filtering. The position and channel attention modules enhance the discriminative ability between the lesion and non-lesion feature representations in spatial and channel dimensions, respectively. A multiscale block is also used to aggregate the coarse-to-fine features of input skin images and reduce the effect of the artifacts. SLSNet is evaluated on two publicly available datasets: ISBI 2017 and ISIC 2018. Although SLSNet has only 2.35 million parameters, the experimental results demonstrate that it achieves segmentation results on a par with the state-of-the-art skin lesion segmentation methods, with an accuracy of 97.61%, and Dice and Jaccard similarity coefficients of 90.63% and 81.98%, respectively. SLSNet can run at more than 110 frames per second (FPS) on a single GTX1080Ti GPU, which is faster than well-known deep learning-based image segmentation models, such as FCN. Therefore, SLSNet can be used for practical dermoscopic applications. |
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB; no proj | Approved | no | ||
Call Number | Admin @ si @ SRA2021 | Serial | 3633 | |
Permanent link to this record | |||||
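The 1-D kernel factorization that the record above credits with reducing the cost of 2-D filtering rests on a simple fact: a separable k×k filter equals a k×1 pass followed by a 1×k pass, costing 2k instead of k² multiplies per output pixel. A numpy check with a separable Sobel kernel (illustrative; SLSNet's filters are learned):

```python
import numpy as np

def correlate2d_valid(img, kernel):
    """Plain 'valid' 2-D correlation, written out for clarity."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(1)
img = rng.standard_normal((16, 16))

# A separable 3x3 kernel: the outer product of two 1-D factors
# (here the Sobel-x filter: vertical smoothing x horizontal derivative).
v = np.array([1.0, 2.0, 1.0])
h = np.array([1.0, 0.0, -1.0])
full = np.outer(v, h)

# Full 2-D filtering vs. the factorized 1-D passes.
out_2d = correlate2d_valid(img, full)
out_fact = correlate2d_valid(correlate2d_valid(img, v[:, None]), h[None, :])
print(np.allclose(out_2d, out_fact))  # → True
```

For a 3×3 kernel the saving is modest (6 vs 9 multiplies), but it grows linearly with kernel size, which is what makes factorized filtering attractive on constrained hardware.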
Author | Emanuel Sanchez Aimar; Petia Radeva; Mariella Dimiccoli | ||||
Title | Social Relation Recognition in Egocentric Photostreams | Type | Conference Article | ||
Year | 2019 | Publication | 26th International Conference on Image Processing | Abbreviated Journal | |
Volume | Issue | Pages | 3227-3231 | ||
Keywords | |||||
Abstract | This paper proposes an approach to automatically categorize the social interactions of a user wearing a photo-camera (2 fpm), relying solely on what the camera is seeing. The problem is challenging due to the overwhelming complexity of social life and the extreme intra-class variability of social interactions captured under unconstrained conditions. We adopt the formalization proposed in Bugental's social theory, which groups human relations into five social domains with related categories. Our method is a new deep learning architecture that exploits the hierarchical structure of the label space and relies on a set of social attributes estimated at frame level to provide a semantic representation of social interactions. Experimental results on the new EgoSocialRelation dataset demonstrate the effectiveness of our proposal. |
Address | Taipei; Taiwan; September 2019 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICIP | ||
Notes | MILAB; no menciona | Approved | no | ||
Call Number | Admin @ si @ SRD2019 | Serial | 3370 | |
Permanent link to this record | |||||
Author | E. Serradell; Adriana Romero; R. Leta; Carlo Gatta; Francesc Moreno-Noguer | ||||
Title | Simultaneous Correspondence and Non-Rigid 3D Reconstruction of the Coronary Tree from Single X-Ray Images | Type | Conference Article | ||
Year | 2011 | Publication | 13th IEEE International Conference on Computer Vision | Abbreviated Journal | |
Volume | Issue | Pages | 850-857 | ||
Keywords | |||||
Abstract | |||||
Address | Barcelona | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICCV | ||
Notes | MILAB | Approved | no | ||
Call Number | Admin @ si @ SRL2011 | Serial | 1803 | |
Permanent link to this record | |||||
Author | Maria Salamo; Inmaculada Rodriguez; Maite Lopez; Anna Puig; Simone Balocco; Mariona Taule | ||||
Title | Recurso docente para la atención de la diversidad en el aula mediante la predicción de notas | Type | Journal | ||
Year | 2016 | Publication | ReVision | Abbreviated Journal | |
Volume | 9 | Issue | 1 | Pages | |
Keywords | Machine learning; Grade prediction system; Teaching tool | |
Abstract | Since the introduction of the European Higher Education Area (EHEA) in the different degree programmes, the need has become clear for mechanisms that address diversity in the classroom by evaluating students automatically and providing rapid feedback to both students and teachers on students' progress in a course. This article presents an evaluation of the prediction accuracy of GRADEFORESEER, a teaching resource for grade prediction based on machine learning techniques that makes it possible to assess students' progress and estimate their final grade at the end of the course. The resource is complemented by a user interface for teachers that can be used on different software platforms (operating systems) and in any continuously assessed degree course. Besides describing the resource, this article presents the results obtained by applying the prediction system in four courses from different disciplines: Programming I (PI) and Software Design (DSW) in the Computer Engineering degree, Information and Communication Technologies (TIC) in the Linguistics degree, and Fundamentals of Technology (FDT) in the Information and Documentation degree, all taught at the Universitat de Barcelona. Predictive performance was evaluated both under a binary criterion (pass or fail) and under a range criterion (fail, pass, good, or excellent), with better predictions obtained under the binary evaluation. |
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB | Approved | no | |
Call Number | Admin @ si @ SRL2016 | Serial | 2820 | |
Permanent link to this record | |||||
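A grade predictor like the one the record above evaluates can be reduced, in its simplest form, to a regression from continuous-assessment marks to the final grade, then scored under the article's binary (pass/fail) criterion. The synthetic marks, weights, and least-squares model below are purely illustrative; GRADEFORESEER's actual features and learning method are not described in this record.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic continuous-assessment marks for 100 students (3 activities, 0-10).
marks = rng.uniform(0, 10, size=(100, 3))
# Assumed grading scheme: final grade is a weighted mix of activities + noise.
weights = np.array([0.3, 0.3, 0.4])
final = marks @ weights + rng.normal(0, 0.2, size=100)

# Fit a least-squares predictor of the final grade from the partial marks.
A = np.hstack([marks, np.ones((100, 1))])   # bias column
w, *_ = np.linalg.lstsq(A, final, rcond=None)
pred = A @ w

# Binary evaluation (pass >= 5), matching the article's binary criterion.
accuracy = float(np.mean((pred >= 5) == (final >= 5)))
print(accuracy > 0.85)
```

Errors concentrate on students whose true grade sits near the pass threshold, which is also why the article reports better results for the binary criterion than for the four-band range criterion.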
Author | Carles Sanchez; Oriol Ramos Terrades; Patricia Marquez; Enric Marti; Jaume Rocarias; Debora Gil | ||||
Title | Evaluación automática de prácticas en Moodle para el aprendizaje autónomo en Ingenierías | Type | Miscellaneous | ||
Year | 2014 | Publication | 8th International Congress on University Teaching and Innovation | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | |||||
Address | Tarragona; July 2014 | |
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CIDUI | ||
Notes | IAM; 600.075;DAG | Approved | no | ||
Call Number | Admin @ si @ SRM2014 | Serial | 2458 | |
Permanent link to this record | |||||
Author | Carles Sanchez; Oriol Ramos Terrades; Patricia Marquez; Enric Marti; Jaume Rocarias; Debora Gil | |
Title | Automatic evaluation of practices in Moodle for Self Learning in Engineering | Type | Journal | ||
Year | 2015 | Publication | Journal of Technology and Science Education | Abbreviated Journal | JOTSE |
Volume | 5 | Issue | 2 | Pages | 97-106 |
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | IAM; DAG; 600.075; 600.077 | Approved | no | ||
Call Number | Admin @ si @ SRM2015 | Serial | 2610 | |
Permanent link to this record | |||||
Author | Stefan Schurischuster; Beatriz Remeseiro; Petia Radeva; Martin Kampel | ||||
Title | A Preliminary Study of Image Analysis for Parasite Detection on Honey Bees | Type | Conference Article | ||
Year | 2018 | Publication | 15th International Conference on Image Analysis and Recognition | Abbreviated Journal | |
Volume | 10882 | Issue | Pages | 465-473 | |
Keywords | |||||
Abstract | Varroa destructor is a parasite harming bee colonies. As the worldwide bee population is in danger, beekeepers as well as researchers are looking for methods to monitor the health of bee hives. In this context, we present a preliminary study to detect parasites in bee videos by means of image analysis and machine learning techniques. For this purpose, each video frame is analyzed individually to extract bee image patches, which are then processed to compute image descriptors and finally classified into bees with and without mites. The experimental results demonstrate the adequacy of the proposed method, which provides a solid stepping stone towards a complete bee monitoring system. |
Address | Povoa de Varzim; Portugal; June 2018 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICIAR | ||
Notes | MILAB; no proj | Approved | no | ||
Call Number | Admin @ si @ SRR2018a | Serial | 3110 | |
Permanent link to this record | |||||
Author | Md.Mostafa Kamal Sarker; Hatem A. Rashwan; Estefania Talavera; Syeda Furruka Banu; Petia Radeva; Domenec Puig | |
Title | MACNet: Multi-scale Atrous Convolution Networks for Food Places Classification in Egocentric Photo-streams | Type | Conference Article | ||
Year | 2018 | Publication | European Conference on Computer Vision workshops | Abbreviated Journal | |
Volume | Issue | Pages | 423-433 | ||
Keywords | |||||
Abstract | A first-person (wearable) camera continually captures unscripted interactions of the camera user with objects, people, and scenes, reflecting their personal and relational tendencies. Among these, people's interactions with food events are of particular interest, since regulating food intake and its duration is important for protecting against disease. Consequently, this work aims to develop a smart model that is able to determine how often a person visits food places during a day. The model is based on a deep end-to-end architecture for automatic food place recognition in egocentric photo-streams. In this paper, we apply multi-scale atrous convolution networks to extract the key features related to the food places appearing in the input images. The proposed model is evaluated on an in-house private dataset called “EgoFoodPlaces”. Experimental results show promising food place classification performance on egocentric photo-streams. |
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | |
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ECCVW | ||
Notes | MILAB; no menciona | Approved | no | ||
Call Number | Admin @ si @ SRR2018b | Serial | 3185 | |
Permanent link to this record | |||||
Author | Albert Suso; Pau Riba; Oriol Ramos Terrades; Josep Llados | ||||
Title | A Self-supervised Inverse Graphics Approach for Sketch Parametrization | Type | Conference Article | ||
Year | 2021 | Publication | 16th International Conference on Document Analysis and Recognition | Abbreviated Journal | |
Volume | 12916 | Issue | Pages | 28-42 | |
Keywords | |||||
Abstract | The study of neural generative models of handwritten text and human sketches is a hot topic in the computer vision field. The landmark SketchRNN provided a breakthrough by sequentially generating sketches as a sequence of waypoints, and more recent articles have managed to generate fully vector sketches by coding the strokes as Bézier curves. However, previous attempts with this approach all require a ground truth consisting of the sequence of points that make up each stroke, which severely limits the datasets the model can be trained on. In this work, we present a self-supervised end-to-end inverse graphics approach that learns to embed each image into its best fit of Bézier curves. The self-supervised nature of the training process allows us to train the model on a wider range of datasets and to obtain better after-training predictions by applying an overfitting process to the input binary image. We report qualitative and quantitative evaluations on the MNIST and the Quick, Draw! datasets. |
Address | Lausanne; Switzerland; September 2021 | |
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICDAR | ||
Notes | DAG; 600.121 | Approved | no | ||
Call Number | Admin @ si @ SRR2021 | Serial | 3675 | |
Permanent link to this record | |||||
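Embedding a stroke image into "its best fit of Bézier curves", as the record above describes, can be posed classically as least squares over control points once each sample is assigned a curve parameter t. The sketch below uses a uniform parametrization as a simplifying assumption; the paper instead learns the embedding end to end with a self-supervised network.

```python
import numpy as np

def bernstein3(t):
    """Cubic Bernstein basis evaluated at parameters t (shape: len(t) x 4)."""
    return np.stack([(1 - t) ** 3,
                     3 * t * (1 - t) ** 2,
                     3 * t ** 2 * (1 - t),
                     t ** 3], axis=1)

def fit_cubic_bezier(points):
    """Least-squares cubic Bezier fit to an ordered 2-D point sequence,
    assuming uniformly spaced curve parameters (a simplification)."""
    pts = np.asarray(points, dtype=float)
    B = bernstein3(np.linspace(0.0, 1.0, len(pts)))
    ctrl, *_ = np.linalg.lstsq(B, pts, rcond=None)
    return ctrl, B @ ctrl          # control points and reconstruction

# Points sampled from a known cubic Bezier are recovered exactly.
ctrl_true = np.array([[0, 0], [1, 2], [3, 2], [4, 0]], dtype=float)
samples = bernstein3(np.linspace(0.0, 1.0, 30)) @ ctrl_true

ctrl_fit, recon = fit_cubic_bezier(samples)
print(np.allclose(ctrl_fit, ctrl_true))  # → True
```

The hard part the paper addresses is precisely what this sketch assumes away: real stroke images come without point sequences or parameter assignments, which is why an inverse-graphics network is needed.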
Author | Xavier Soria; Edgar Riba; Angel Sappa | ||||
Title | Dense Extreme Inception Network: Towards a Robust CNN Model for Edge Detection | Type | Conference Article | ||
Year | 2020 | Publication | IEEE Winter Conference on Applications of Computer Vision | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | This paper proposes a deep learning based edge detector inspired by both HED (Holistically-Nested Edge Detection) and the Xception network. The proposed approach generates thin edge maps that are plausible to the human eye; it can be used in any edge detection task without a prior training or fine-tuning process. As a second contribution, a large dataset with carefully annotated edges has been generated. This dataset has been used to train the proposed approach as well as the state-of-the-art algorithms used for comparison. Quantitative and qualitative evaluations have been performed on different benchmarks, showing improvements with the proposed method when the F-measure at ODS and OIS is considered. |
Address | Aspen; USA; March 2020 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | WACV | ||
Notes | MSIAU; 600.130; 601.349; 600.122 | Approved | no | ||
Call Number | Admin @ si @ SRS2020 | Serial | 3434 | |
Permanent link to this record | |||||
Author | Xavier Soria; Angel Sappa; Arash Akbarinia | ||||
Title | Multispectral Single-Sensor RGB-NIR Imaging: New Challenges and Opportunities | Type | Conference Article | ||
Year | 2017 | Publication | 7th International Conference on Image Processing Theory, Tools & Applications | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | Color restoration; Neural networks; Single-sensor cameras; Multispectral images; RGB-NIR dataset | |
Abstract | Multispectral images captured with a single-sensor camera have become an attractive alternative for numerous computer vision applications. However, in order to fully exploit their potential, the color restoration problem (RGB representation) should be addressed. This problem is most evident in outdoor scenarios containing vegetation, living beings, or specular materials. The color distortion emerges from the sensors' sensitivity to the overlapping visible and near infrared spectral bands. This paper empirically evaluates the variability of the near infrared (NIR) information with respect to changes of light throughout the day. A tiny neural network is proposed to restore the RGB color representation from the given RGBN (Red, Green, Blue, NIR) images. In order to evaluate the proposed algorithm, different experiments on an RGBN outdoor dataset are conducted, which include various challenging cases. The obtained results show both the difficulty and the importance of addressing color restoration in single-sensor multispectral images. |
Address | Montreal; Canada; November 2017 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | IPTA | ||
Notes | NEUROBIT; MSIAU; 600.122 | Approved | no | ||
Call Number | Admin @ si @ SSA2017 | Serial | 3074 | |
Permanent link to this record | |||||
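The "tiny neural network" in the record above maps RGBN pixels to restored RGB values. The smallest version of such a mapping is a single learned linear transform per pixel, sketched here on synthetic data with an assumed per-band NIR leakage; the paper's network, dataset, and contamination model are its own.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic scene: true RGB plus a NIR channel that leaks into each band.
n = 5000
rgb_true = rng.uniform(0, 1, size=(n, 3))
nir = rng.uniform(0, 1, size=(n, 1))
leak = np.array([0.20, 0.15, 0.10])      # assumed NIR contamination per band
rgbn = np.hstack([rgb_true + nir * leak, nir])

# "Tiny network" reduced to one linear layer: least-squares 4 -> 3 mapping.
W, *_ = np.linalg.lstsq(rgbn, rgb_true, rcond=None)
rgb_hat = rgbn @ W

err = float(np.abs(rgb_hat - rgb_true).max())
print(err < 1e-6)  # → True
```

Because the synthetic contamination is exactly linear, the least-squares map recovers it perfectly; real sensor crosstalk is scene- and illumination-dependent, which is why the paper trains a nonlinear network and studies NIR variability across the day.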
Author | Lorenzo Seidenari; Giuseppe Serra; Andrew Bagdanov; Alberto del Bimbo | ||||
Title | Local pyramidal descriptors for image recognition | Type | Journal Article | ||
Year | 2014 | Publication | IEEE Transactions on Pattern Analysis and Machine Intelligence | Abbreviated Journal | TPAMI |
Volume | 36 | Issue | 5 | Pages | 1033 - 1040 |
Keywords | Object categorization; local features; kernel methods | ||||
Abstract | In this paper we present a novel method to improve the flexibility of descriptor matching for image recognition by using local multiresolution pyramids in feature space. We propose that image patches be represented at multiple levels of descriptor detail and that these levels be defined in terms of local spatial pooling resolution. Preserving multiple levels of detail in local descriptors is a way of hedging one's bets on which levels will be most relevant for matching during learning and recognition. We introduce the Pyramid SIFT (P-SIFT) descriptor and show that its use in four state-of-the-art image recognition pipelines improves accuracy and yields state-of-the-art results. Our technique is applicable independently of spatial pyramid matching, and we show that spatial pyramids can be combined with local pyramids to obtain further improvement. We achieve state-of-the-art results on Caltech-101 (80.1%) and Caltech-256 (52.6%) when compared to other approaches based on SIFT features over intensity images. Our technique is efficient and is extremely easy to integrate into image recognition pipelines. |
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0162-8828 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | LAMP; 600.079 | Approved | no | ||
Call Number | Admin @ si @ SSB2014 | Serial | 2524 | |
Permanent link to this record |
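The local pyramid idea in the record above keeps several levels of spatial pooling detail for each patch, from one coarse cell up to a fine grid, concatenated into one descriptor. The schematic below pools raw intensities, whereas P-SIFT pools gradient-orientation histograms; it only illustrates the multi-resolution pooling structure.

```python
import numpy as np

def pyramid_descriptor(patch, levels=(1, 2, 4)):
    """Concatenate average-pooled responses of a square patch at several
    spatial pooling resolutions (1x1, 2x2 and 4x4 grids by default)."""
    size = patch.shape[0]
    desc = []
    for g in levels:
        cell = size // g
        for i in range(g):
            for j in range(g):
                desc.append(patch[i * cell:(i + 1) * cell,
                                  j * cell:(j + 1) * cell].mean())
    return np.array(desc)

patch = np.arange(64, dtype=float).reshape(8, 8)
d = pyramid_descriptor(patch)
print(len(d))  # → 21 (1 + 4 + 16 cells)
```

Keeping all levels, rather than committing to one pooling resolution, is the "hedging one's bets" the abstract refers to: the matching stage can weight whichever level turns out to be most discriminative.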