|
Records |
Links |
|
Author |
Dani Rowe; Jordi Gonzalez; Ivan Huerta; Juan J. Villanueva |
|
|
Title |
On Reasoning over Tracking Events |
Type |
Conference Article |
|
Year |
2007 |
Publication |
15th Scandinavian Conference on Image Analysis |
Abbreviated Journal |
|
|
|
Volume |
4522 |
Issue |
|
Pages ![sorted by First Page field, descending order (down)](img/sort_desc.gif) |
502–511 |
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
Aalborg (Denmark) |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
SCIA'07 |
|
|
Notes |
ISE |
Approved |
no |
|
|
Call Number |
ISE @ ise @ RGH2007 |
Serial |
784 |
|
Permanent link to this record |
|
|
|
|
Author |
Ernest Valveny; Ricardo Toledo; Ramon Baldrich; Enric Marti |
|
|
Title |
Combining recognition-based and segmentation-based approaches for graphic symbol recognition using deformable template matching |
Type |
Conference Article |
|
Year |
2002 |
Publication |
Proceedings of the Second IASTED International Conference on Visualization, Imaging and Image Processing VIIP 2002 |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
502–507 |
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
DAG;RV;CAT;IAM;CIC;ADAS |
Approved |
no |
|
|
Call Number |
IAM @ iam @ VTB2002 |
Serial |
1660 |
|
Permanent link to this record |
|
|
|
|
Author |
Pau Rodriguez; Diego Velazquez; Guillem Cucurull; Josep M. Gonfaus; Xavier Roca; Jordi Gonzalez |
|
|
Title |
Pay attention to the activations: a modular attention mechanism for fine-grained image recognition |
Type |
Journal Article |
|
Year |
2020 |
Publication |
IEEE Transactions on Multimedia |
Abbreviated Journal |
TMM |
|
|
Volume |
22 |
Issue |
2 |
Pages |
502–514 |
|
|
Keywords |
|
|
|
Abstract |
Fine-grained image recognition is central to many multimedia tasks such as search, retrieval, and captioning. Unfortunately, these tasks are still challenging since the appearance of samples of the same class can differ more than that of samples from different classes. This issue is mainly due to changes in deformation, pose, and the presence of clutter. In the literature, attention has been one of the most successful strategies to handle the aforementioned problems. Attention has typically been implemented in neural networks by selecting the most informative regions of the image that improve classification. In contrast, in this paper, attention is not applied at the image level but to the convolutional feature activations. In essence, with our approach, the neural model learns to attend to lower-level feature activations without requiring part annotations and uses those activations to update and rectify the output likelihood distribution. The proposed mechanism is modular, architecture-independent, and efficient in terms of both parameters and computation required. Experiments demonstrate that well-known networks such as wide residual networks and ResNeXt, when augmented with our approach, systematically improve their classification accuracy and become more robust to changes in deformation and pose and to the presence of clutter. As a result, our proposal reaches state-of-the-art classification accuracies in CIFAR-10, the Adience gender recognition task, Stanford Dogs, and UEC-Food100 while obtaining competitive performance in ImageNet, CIFAR-100, CUB200 Birds, and Stanford Cars. In addition, we analyze the different components of our model, showing that the proposed attention modules succeed in finding the most discriminative regions of the image. Finally, as a proof of concept, we demonstrate that with only local predictions, an augmented neural network can successfully classify an image before reaching any fully connected layer, thus reducing the amount of computation by up to 10%. |
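The abstract above describes attending to convolutional feature activations and using gated local predictions to rectify the network's output distribution. A minimal NumPy sketch of that general idea follows; all shapes, names, and the multiplicative combination rule are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_module(feats, w_att, w_cls):
    """One attention module over a conv feature map.

    feats: (C, H, W) activations; w_att: (C,) attention weights;
    w_cls: (C, K) local classifier. Returns a (K,) local class distribution.
    """
    C, H, W = feats.shape
    flat = feats.reshape(C, H * W)        # C x HW
    scores = w_att @ flat                 # relevance score per spatial cell
    alpha = softmax(scores)               # attention over the HW cells
    attended = flat @ alpha               # (C,) attention-pooled descriptor
    return softmax(attended @ w_cls)      # local prediction from activations

def rectified_prediction(global_logits, local_preds, gates):
    """Combine the network output with gated local predictions."""
    p = softmax(global_logits)
    g = softmax(np.asarray(gates))        # learned gate per attention module
    local = sum(gi * pi for gi, pi in zip(g, local_preds))
    combined = p * local                  # rectify the output likelihood
    return combined / combined.sum()
```

In a real network the weights would be learned end-to-end and one module would hang off each chosen layer; here they are random placeholders.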
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ISE; 600.119; 600.098 |
Approved |
no |
|
|
Call Number |
Admin @ si @ RVC2020a |
Serial |
3417 |
|
Permanent link to this record |
|
|
|
|
Author |
Marçal Rusiñol; David Aldavert; Ricardo Toledo; Josep Llados |
|
|
Title |
Towards Query-by-Speech Handwritten Keyword Spotting |
Type |
Conference Article |
|
Year |
2015 |
Publication |
13th International Conference on Document Analysis and Recognition ICDAR2015 |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
501–505 |
|
|
Keywords |
|
|
|
Abstract |
In this paper, we present a new querying paradigm for handwritten keyword spotting. We propose to represent handwritten word images by both visual and audio representations, enabling a query-by-speech keyword spotting system. The two representations are merged together and projected to a common sub-space in the training phase. This transform allows, given a spoken query, retrieving word instances that were only represented by the visual modality. In addition, the same method can be used backwards at no additional cost to produce a handwritten text-to-speech system. We present our first results on this new querying mechanism using synthetic voices over the George Washington dataset. |
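The abstract above hinges on projecting paired visual and audio representations into a common sub-space, then answering spoken queries against visual-only entries. A toy NumPy sketch of that retrieval pattern follows; the SVD-of-cross-covariance projection and all names are stand-in assumptions, not the authors' actual learned transform.

```python
import numpy as np

def fit_common_subspace(X_vis, X_aud, dim):
    """Learn linear maps projecting both modalities to a shared space.

    X_vis, X_aud: (N, D) paired training features for the same N words.
    Uses an SVD of the cross-covariance as a simple stand-in transform.
    """
    Xv = X_vis - X_vis.mean(0)
    Xa = X_aud - X_aud.mean(0)
    U, _, Vt = np.linalg.svd(Xv.T @ Xa, full_matrices=False)
    return U[:, :dim], Vt[:dim].T      # visual map Wv, audio map Wa

def spot(query_aud, word_images_vis, Wv, Wa):
    """Rank visual-only word images against one spoken query."""
    q = query_aud @ Wa
    P = word_images_vis @ Wv
    # cosine similarity in the common sub-space, best match first
    sim = (P @ q) / (np.linalg.norm(P, axis=1) * np.linalg.norm(q) + 1e-12)
    return np.argsort(-sim)
```

The same machinery run "backwards" (query with a visual feature, rank audio entries) is what the abstract calls the text-to-speech direction.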
|
|
Address |
Nancy; France; August 2015 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICDAR |
|
|
Notes |
DAG; 600.084; 600.061; 601.223; 600.077;ADAS |
Approved |
no |
|
|
Call Number |
Admin @ si @ RAT2015b |
Serial |
2682 |
|
Permanent link to this record |
|
|
|
|
Author |
Arturo Fuentes; F. Javier Sanchez; Thomas Voncina; Jorge Bernal |
|
|
Title |
LAMV: Learning to Predict Where Spectators Look in Live Music Performances |
Type |
Conference Article |
|
Year |
2021 |
Publication |
16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications |
Abbreviated Journal |
|
|
|
Volume |
5 |
Issue |
|
Pages |
500–507 |
|
|
Keywords |
|
|
|
Abstract |
The advent of artificial intelligence has changed how many daily work tasks are performed. The analysis of cultural content has seen a huge boost from the development of computer-assisted methods that allow easy and transparent data access. In our case, we deal with automating the production of live shows, such as music concerts, aiming to develop a system that can indicate to the producer which camera to show based on what each of them is capturing. In this context, we consider it essential to understand where spectators look and what they are interested in, so the computational method can learn from this information. The work presented here shows the results of a first preliminary study in which we compare areas of interest defined by human beings with those indicated by an automatic system. Our system is based on extracting motion textures from dynamic Spatio-Temporal Volumes (STV) and then analyzing the patterns by means of texture analysis techniques. We validate our approach over several video sequences that have been labeled by 16 different experts. Our method is able to match the relevant areas identified by the experts, achieving recall scores higher than 80% when a distance of 80 pixels between method and ground truth is considered. Current performance shows promise for detecting abnormal peaks and movement trends. |
|
|
Address |
Virtual; February 2021 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
VISIGRAPP |
|
|
Notes |
MV; ISE; 600.119; |
Approved |
no |
|
|
Call Number |
Admin @ si @ FSV2021 |
Serial |
3570 |
|
Permanent link to this record |
|
|
|
|
Author |
Carlo Gatta; Adriana Romero; Joost Van de Weijer |
|
|
Title |
Unrolling loopy top-down semantic feedback in convolutional deep networks |
Type |
Conference Article |
|
Year |
2014 |
Publication |
Workshop on Deep Vision: Deep Learning for Computer Vision |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
498–505 |
|
|
Keywords |
|
|
|
Abstract |
In this paper, we propose a novel way to perform top-down semantic feedback in convolutional deep networks for efficient and accurate image parsing. We also show how to add global appearance/semantic features, which have been shown to improve image parsing performance in state-of-the-art methods but were not present in previous convolutional approaches. The proposed method is characterised by efficient training and sufficiently fast testing. We use the well-known SIFTflow dataset to numerically show the advantages provided by our contributions, and to compare with state-of-the-art convolutional image parsing approaches. |
|
|
Address |
Columbus; Ohio; June 2014 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CVPRW |
|
|
Notes |
LAMP; MILAB; 601.160; 600.079 |
Approved |
no |
|
|
Call Number |
Admin @ si @ GRW2014 |
Serial |
2490 |
|
Permanent link to this record |
|
|
|
|
Author |
Mathieu Nicolas Delalandre; Ernest Valveny; Josep Llados |
|
|
Title |
Performance Evaluation of Symbol Recognition and Spotting Systems |
Type |
Conference Article |
|
Year |
2008 |
Publication |
Proceedings of the 8th International Workshop on Document Analysis Systems |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
497–505 |
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
Nara (Japan) |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
DAS |
|
|
Notes |
DAG |
Approved |
no |
|
|
Call Number |
DAG @ dag @ DVL2008b |
Serial |
1060 |
|
Permanent link to this record |
|
|
|
|
Author |
Josep Llados; Jaime Lopez-Krahe; Enric Marti |
|
|
Title |
Hand drawn document understanding using the straight line Hough transform and graph matching |
Type |
Conference Article |
|
Year |
1996 |
Publication |
Proceedings of the 13th International Conference on Pattern Recognition (ICPR'96) |
Abbreviated Journal |
|
|
|
Volume |
2 |
Issue |
|
Pages |
497–501 |
|
|
Keywords |
|
|
|
Abstract |
This paper presents a system to understand hand-drawn architectural drawings in a CAD environment. The procedure is to identify in a floor plan the building elements, stored in a library of patterns, and their spatial relationships. The vectorized input document and the patterns to recognize are represented by attributed graphs. To recognize the patterns, we apply a structural approach based on subgraph isomorphism techniques. In spite of their value, graph matching techniques do not adequately recognize those building elements characterized by hatching patterns, i.e. walls. Here we focus on the recognition of hatching patterns and develop a straight line Hough transform based method to detect the regions filled in with parallel straight lines. This not only allows filling patterns to be recognized, but actually reduces the computational load associated with the subgraph isomorphism computation. The result is that the document can be redrawn by editing all the recognized patterns. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
Vienna, Austria |
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
DAG;IAM; |
Approved |
no |
|
|
Call Number |
IAM @ iam @ LLM1996 |
Serial |
1579 |
|
Permanent link to this record |
|
|
|
|
Author |
Sergio Escalera; Alicia Fornes; Oriol Pujol; Josep Llados; Petia Radeva |
|
|
Title |
Circular Blurred Shape Model for Multiclass Symbol Recognition |
Type |
Journal Article |
|
Year |
2011 |
Publication |
IEEE Transactions on Systems, Man, and Cybernetics, Part B |
Abbreviated Journal |
TSMCB |
|
|
Volume |
41 |
Issue |
2 |
Pages |
497–506 |
|
|
Keywords |
|
|
|
Abstract |
In this paper, we propose a circular blurred shape model descriptor to deal with the problem of symbol detection and classification as a particular case of object recognition. The feature extraction is performed by capturing the spatial arrangement of significant object characteristics in a correlogram structure. The shape information from objects is shared among correlogram regions, where a prior blurring degree defines the level of distortion allowed in the symbol, making the descriptor tolerant to irregular deformations. Moreover, the descriptor is rotation invariant by definition. We validate the effectiveness of the proposed descriptor in both the multiclass symbol recognition and symbol detection domains. In order to perform the symbol detection, the descriptors are learned using a cascade of classifiers. In the case of multiclass categorization, the new feature space is learned using a set of binary classifiers which are embedded in an error-correcting output code design. The results over four symbol data sets show the significant improvements of the proposed descriptor compared to the state-of-the-art descriptors. In particular, the results are even more significant in those cases where the symbols suffer from elastic deformations. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1083-4419 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB; DAG;HuPBA |
Approved |
no |
|
|
Call Number |
Admin @ si @ EFP2011 |
Serial |
1784 |
|
Permanent link to this record |
|
|
|
|
Author |
Daniela Rato; Miguel Oliveira; Vitor Santos; Manuel Gomes; Angel Sappa |
|
|
Title |
A sensor-to-pattern calibration framework for multi-modal industrial collaborative cells |
Type |
Journal Article |
|
Year |
2022 |
Publication |
Journal of Manufacturing Systems |
Abbreviated Journal |
JMANUFSYST |
|
|
Volume |
64 |
Issue |
|
Pages |
497–507 |
|
|
Keywords |
Calibration; Collaborative cell; Multi-modal; Multi-sensor |
|
|
Abstract |
Collaborative robotic industrial cells are workspaces where robots collaborate with human operators. In this context, safety is paramount, and a complete perception of the space where the collaborative robot operates is therefore necessary. To ensure this, collaborative cells are equipped with a large set of sensors of multiple modalities, covering the entire work volume. However, fusing the information from all these sensors requires an accurate extrinsic calibration. The calibration of such complex systems is challenging, due to the number of sensors and modalities, and also due to the small overlapping fields of view between the sensors, which are positioned to capture different viewpoints of the cell. This paper proposes a sensor-to-pattern methodology that can calibrate a complex system such as a collaborative cell in a single optimization procedure. Our methodology can handle RGB and depth cameras, as well as LiDARs. Results show that our methodology is able to accurately calibrate a collaborative cell containing three RGB cameras, a depth camera and three 3D LiDARs. |
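The core idea in the abstract above is stacking every sensor-to-pattern residual into one optimization. A deliberately tiny NumPy sketch of that structure follows, estimating only a per-sensor 2-D translation (the real method solves full multi-modal extrinsics); the setup and names are illustrative assumptions.

```python
import numpy as np

def calibrate_translations(pattern_pts, observations):
    """Jointly estimate one 2-D translation per sensor in a single solve.

    pattern_pts: (P, 2) known calibration-pattern points in the cell frame.
    observations: list of (P, 2) arrays, one per sensor, where
    obs_s = pattern_pts + t_s (+ noise). All residuals share one
    least-squares problem, mirroring the single-procedure idea.
    """
    S, P = len(observations), len(pattern_pts)
    # Unknowns: [t_1x, t_1y, ..., t_Sx, t_Sy]; one row block per sensor.
    A = np.zeros((S * P * 2, S * 2))
    b = np.zeros(S * P * 2)
    for s, obs in enumerate(observations):
        rows = slice(s * P * 2, (s + 1) * P * 2)
        A[rows, 2 * s:2 * s + 2] = np.tile(np.eye(2), (P, 1))
        b[rows] = (obs - pattern_pts).ravel()   # residuals if t_s were zero
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    return t.reshape(S, 2)
```

Extending the same pattern to rotations and multiple modalities turns the linear solve into the nonlinear optimization the paper describes.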
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Science Direct |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MSIAU; MACO |
Approved |
no |
|
|
Call Number |
Admin @ si @ ROS2022 |
Serial |
3750 |
|
Permanent link to this record |
|
|
|
|
Author |
Antonio Hernandez; Carlo Gatta; Sergio Escalera; Laura Igual; Victoria Martin Yuste; Petia Radeva |
|
|
Title |
Accurate and Robust Fully-Automatic QCA: Method and Numerical Validation |
Type |
Conference Article |
|
Year |
2011 |
Publication |
14th International Conference on Medical Image Computing and Computer Assisted Intervention |
Abbreviated Journal |
|
|
|
Volume |
14 |
Issue |
3 |
Pages |
496–503 |
|
|
Keywords |
|
|
|
Abstract |
Quantitative Coronary Angiography (QCA) is a methodology used to evaluate arterial diseases and, in particular, the degree of stenosis. In this paper we propose AQCA, a fully automatic method for vessel segmentation based on graph cut theory. Vesselness, geodesic paths and a new multi-scale edgeness map are used to compute a globally optimal artery segmentation. We evaluate the method's performance in a rigorous numerical way on two datasets. The method can detect an artery with precision 92.9 +/- 5% and sensitivity 94.2 +/- 6%. The average absolute distance error between the detected and ground truth centerlines is 1.13 +/- 0.11 pixels (about 0.27 +/- 0.025 mm), and the absolute relative error in vessel caliber estimation is 2.93% with almost no bias. Moreover, the method can discriminate between arteries and catheter with an accuracy of 96.4%. |
|
|
Address |
Toronto, Canada |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
978-3-642-23625-9 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
MICCAI |
|
|
Notes |
MILAB;HuPBA |
Approved |
no |
|
|
Call Number |
Admin @ si @ HGE2011 |
Serial |
1769 |
|
Permanent link to this record |
|
|
|
|
Author |
Victor Ponce; Sergio Escalera; Xavier Baro |
|
|
Title |
Multi-modal Social Signal Analysis for Predicting Agreement in Conversation Settings |
Type |
Conference Article |
|
Year |
2013 |
Publication |
15th ACM International Conference on Multimodal Interaction |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
495–502 |
|
|
Keywords |
|
|
|
Abstract |
In this paper we present a non-invasive ambient intelligence framework for the analysis of non-verbal communication applied to conversational settings. In particular, we apply feature extraction techniques to multi-modal audio-RGB-depth data. We compute a set of behavioral indicators that define communicative cues drawn from the fields of psychology and observational methodology. We test our methodology on data captured in victim-offender mediation scenarios. Using different state-of-the-art classification approaches, our system achieves around 75% accuracy in predicting agreement among the parties involved in the conversations, using the experts' opinions as ground truth. |
|
|
Address |
Sydney; Australia; December 2013 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
978-1-4503-2129-7 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICMI |
|
|
Notes |
HuPBA;MV |
Approved |
no |
|
|
Call Number |
Admin @ si @ PEB2013 |
Serial |
2488 |
|
Permanent link to this record |
|
|
|
|
Author |
Rafael E. Rivadeneira; Angel Sappa; Boris X. Vintimilla |
|
|
Title |
Thermal Image Super-Resolution: A Novel Unsupervised Approach |
Type |
Conference Article |
|
Year |
2022 |
Publication |
International Joint Conference on Computer Vision, Imaging and Computer Graphics |
Abbreviated Journal |
|
|
|
Volume |
1474 |
Issue |
|
Pages |
495–506 |
|
|
Keywords |
|
|
|
Abstract |
This paper proposes the use of a CycleGAN architecture for thermal image super-resolution under a transfer domain strategy, where middle-resolution images from one camera are transferred to the higher-resolution domain of another camera. The proposed approach is trained with a large dataset acquired using three thermal cameras at different resolutions, following an unsupervised learning process. An additional loss function is proposed to improve on results from state-of-the-art approaches. Evaluations are performed following the first thermal image super-resolution challenge (PBVS-CVPR2020). A comparison with previous works is presented, showing that the proposed approach reaches the best results. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
VISIGRAPP |
|
|
Notes |
MSIAU; 600.130 |
Approved |
no |
|
|
Call Number |
Admin @ si @ RSV2022d |
Serial |
3776 |
|
Permanent link to this record |
|
|
|
|
Author |
Sergio Escalera; Oriol Pujol; Petia Radeva |
|
|
Title |
Sub-Class Error-Correcting Output Codes |
Type |
Book Chapter |
|
Year |
2008 |
Publication |
Computer Vision Systems. 6th International Conference |
Abbreviated Journal |
|
|
|
Volume |
5008 |
Issue |
|
Pages |
494–504 |
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
Santorini (Greece) |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICVS |
|
|
Notes |
MILAB;HuPBA |
Approved |
no |
|
|
Call Number |
BCNPCL @ bcnpcl @ EPR2008c |
Serial |
963 |
|
Permanent link to this record |
|
|
|
|
Author |
Noha Elfiky; Jordi Gonzalez; Xavier Roca |
|
|
Title |
Compact and Adaptive Spatial Pyramids for Scene Recognition |
Type |
Journal Article |
|
Year |
2012 |
Publication |
Image and Vision Computing |
Abbreviated Journal |
IMAVIS |
|
|
Volume |
30 |
Issue |
8 |
Pages |
492–500 |
|
|
Keywords |
|
|
|
Abstract |
Most successful approaches to scene recognition tend to efficiently combine global image features with spatial local appearance and shape cues. On the other hand, less attention has been devoted to studying spatial texture features within scenes. Our method is based on the insight that scenes can be seen as a composition of micro-texture patterns. This paper analyzes the role of texture, along with its spatial layout, for scene recognition. However, one main drawback of the resulting spatial representation is its huge dimensionality. Hence, we propose a technique that addresses this problem by presenting a compact Spatial Pyramid (SP) representation. The basis of our compact representation, namely the Compact Adaptive Spatial Pyramid (CASP), consists of a two-stage compression strategy. This strategy is based on the Agglomerative Information Bottleneck (AIB) theory for (i) compressing the least informative SP features, and (ii) automatically learning the most appropriate shape for each category. Our method exceeds the state-of-the-art results on several challenging scene recognition data sets. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ISE |
Approved |
no |
|
|
Call Number |
Admin @ si @ EGR2012 |
Serial |
2004 |
|
Permanent link to this record |