Author |
Robert Benavente; Gemma Sanchez; Ramon Baldrich; Maria Vanrell; Josep Llados |
|
|
Title |
Normalized colour segmentation for human appearance description |
Type |
Conference Article |
|
Year |
2000 |
Publication |
15th International Conference on Pattern Recognition |
Abbreviated Journal |
|
|
|
Volume |
3 |
Issue |
|
Pages |
637-641 |
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
Barcelona |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICPR |
|
|
Notes |
DAG;CIC |
Approved |
no |
|
|
Call Number |
CAT @ cat @ BSB2000 |
Serial |
223 |
|
Permanent link to this record |
|
|
|
|
Author |
Susana Alvarez; Anna Salvatella; Maria Vanrell; Xavier Otazu |
|
|
Title |
Perceptual color texture codebooks for retrieving in highly diverse texture datasets |
Type |
Conference Article |
|
Year |
2010 |
Publication |
20th International Conference on Pattern Recognition |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
866–869 |
|
|
Keywords |
|
|
|
Abstract |
Color and texture are visual cues of a different nature, so their integration into a useful visual descriptor is not an obvious step. One way to combine both features is to compute texture descriptors independently on each color channel. A second way is to integrate the features at the descriptor level, in which case the problem of normalizing both cues arises. Significant progress in object recognition in recent years has come from the bag-of-words framework, which again deals with the problem of feature combination through the definition of vocabularies of visual words. Inspired by this framework, here we present perceptual textons that allow us to fuse color and texture at the level of p-blobs, which is our feature detection step. Feature representation is based on two uniform spaces representing the attributes of the p-blobs. The low dimensionality of these texton spaces allows us to bypass the usual problems of previous approaches: firstly, there is no need for normalization between cues; and secondly, vocabularies are obtained directly from the perceptual properties of the texton spaces without any learning step. Our proposal improves on the current state of the art in color-texture descriptors in an image retrieval experiment over a highly diverse texture dataset from Corel. |
|
|
Address |
Istanbul (Turkey) |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1051-4651 |
ISBN |
978-1-4244-7542-1 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICPR |
|
|
Notes |
CIC |
Approved |
no |
|
|
Call Number |
CAT @ cat @ ASV2010b |
Serial |
1426 |
|
Permanent link to this record |
|
|
|
|
Author |
Fahad Shahbaz Khan; Joost Van de Weijer; Andrew Bagdanov; Michael Felsberg |
|
|
Title |
Scale Coding Bag-of-Words for Action Recognition |
Type |
Conference Article |
|
Year |
2014 |
Publication |
22nd International Conference on Pattern Recognition |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
1514-1519 |
|
|
Keywords |
|
|
|
Abstract |
Recognizing human actions in still images is a challenging problem in computer vision due to significant amounts of scale, illumination, and pose variation. Given the bounding box of a person at both training and test time, the task is to classify the action associated with each bounding box in an image. Most state-of-the-art methods use the bag-of-words paradigm for action recognition. The bag-of-words framework employing a dense multi-scale grid sampling strategy is the de facto standard for feature detection. This results in a scale-invariant image representation where all the features at multiple scales are binned in a single histogram. We argue that such a scale-invariant strategy is sub-optimal since it ignores the multi-scale information available with each bounding box of a person. This paper investigates alternative approaches to scale coding for action recognition in still images. We encode multi-scale information explicitly in three different histograms for small, medium, and large scale visual words. Our first approach exploits multi-scale information with respect to the image size. In our second approach, we encode multi-scale information relative to the size of the bounding box of a person instance. In each approach, the multi-scale histograms are then concatenated into a single representation for action classification. We validate our approaches on the Willow dataset, which contains seven action categories: interacting with computer, photography, playing music, riding bike, riding horse, running, and walking. Our results clearly suggest that the proposed scale coding approaches outperform the conventional scale-invariant technique. Moreover, we show that our approach obtains promising results compared to more complex state-of-the-art methods. |
|
|
Address |
Stockholm; August 2014 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICPR |
|
|
Notes |
CIC; LAMP; 601.240; 600.074; 600.079 |
Approved |
no |
|
|
Call Number |
Admin @ si @ KWB2014 |
Serial |
2450 |
|
Permanent link to this record |
|
|
|
|
Author |
Antonio Lopez; J. Hilgenstock; A. Busse; Ramon Baldrich; Felipe Lumbreras; Joan Serrat |
|
|
Title |
Temporal Coherence Analysis for Intelligent Headlight Control |
Type |
Miscellaneous |
|
Year |
2008 |
Publication |
2nd Workshop on Perception, Planning and Navigation for Intelligent Vehicles |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
59–64 |
|
|
Keywords |
Intelligent Headlights |
|
|
Abstract |
|
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
IROS |
|
|
Notes |
ADAS;CIC |
Approved |
no |
|
|
Call Number |
ADAS @ adas @ LHB2008b |
Serial |
1112 |
|
Permanent link to this record |
|
|
|
|
Author |
Aleksandr Setkov; Fabio Martinez Carillo; Michele Gouiffes; Christian Jacquemin; Maria Vanrell; Ramon Baldrich |
|
|
Title |
DAcImPro: A Novel Database of Acquired Image Projections and Its Application to Object Recognition |
Type |
Conference Article |
|
Year |
2015 |
Publication |
Advances in Visual Computing: Proceedings of the 11th International Symposium, ISVC 2015, Part II |
Abbreviated Journal |
|
|
|
Volume |
9475 |
Issue |
|
Pages |
463-473 |
|
|
Keywords |
Projector-camera systems; Feature descriptors; Object recognition |
|
|
Abstract |
Projector-camera systems are designed to improve projection quality by comparing original images with their captured projections, which is usually complicated by high photometric and geometric variations. Many research works address this problem using their own test data, which makes it extremely difficult to compare different proposals. This paper has two main contributions. Firstly, we introduce a new database of acquired image projections (DAcImPro) that covers varying photometric and geometric conditions and provides data for ground-truth computation, and can therefore serve to evaluate different algorithms for projector-camera systems. Secondly, a new object recognition scenario based on acquired projections is presented, which could be of great interest in domains such as home video projection and public presentations. We show that the task is more challenging than the classical recognition problem and thus requires additional pre-processing, such as color compensation or projection area selection. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer International Publishing |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0302-9743 |
ISBN |
978-3-319-27862-9 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ISVC |
|
|
Notes |
CIC |
Approved |
no |
|
|
Call Number |
Admin @ si @ SMG2015 |
Serial |
2736 |
|
Permanent link to this record |
|
|
|
|
Author |
Hassan Ahmed Sial; Ramon Baldrich; Maria Vanrell; Dimitris Samaras |
|
|
Title |
Light Direction and Color Estimation from Single Image with Deep Regression |
Type |
Conference Article |
|
Year |
2020 |
Publication |
London Imaging Conference |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
We present a method to estimate the direction and color of the scene light source from a single image. Our method is based on two main ideas: (a) we use a new synthetic dataset with strong shadow effects, with constraints similar to those of the SID dataset; (b) we define a deep architecture trained on this dataset to estimate the direction and color of the scene light source. Apart from showing good performance on synthetic images, we additionally propose a preliminary procedure to obtain the light positions of the Multi-Illumination dataset and, in this way, also show that our trained model achieves good performance when applied to real scenes. |
|
|
Address |
Virtual; September 2020 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
LIM |
|
|
Notes |
CIC; 600.118; 600.140 |
Approved |
no |
|
|
Call Number |
Admin @ si @ SBV2020 |
Serial |
3460 |
|
Permanent link to this record |
|
|
|
|
Author |
Fahad Shahbaz Khan; Joost Van de Weijer; Andrew Bagdanov; Maria Vanrell |
|
|
Title |
Portmanteau Vocabularies for Multi-Cue Image Representation |
Type |
Conference Article |
|
Year |
2011 |
Publication |
25th Annual Conference on Neural Information Processing Systems |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
We describe a novel technique for feature combination in the bag-of-words model of image classification. Our approach builds discriminative compound words from primitive cues learned independently from training images. Our main observation is that modeling joint-cue distributions independently is more statistically robust for typical classification problems than attempting to empirically estimate the dependent, joint-cue distribution directly. We use information-theoretic vocabulary compression to find discriminative combinations of cues, and the resulting vocabulary of portmanteau words is compact, has the cue binding property, and supports individual weighting of cues in the final image representation. State-of-the-art results on both the Oxford Flower-102 and Caltech-UCSD Bird-200 datasets demonstrate the effectiveness of our technique compared to other, significantly more complex approaches to multi-cue image representation. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
NIPS |
|
|
Notes |
CIC |
Approved |
no |
|
|
Call Number |
Admin @ si @ KWB2011 |
Serial |
1865 |
|
Permanent link to this record |
|
|
|
|
Author |
Naila Murray; Sandra Skaff; Luca Marchesotti; Florent Perronnin |
|
|
Title |
Towards Automatic Concept Transfer |
Type |
Conference Article |
|
Year |
2011 |
Publication |
Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Non-Photorealistic Animation and Rendering |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
167-176 |
|
|
Keywords |
chromatic modeling; color concepts; color transfer; concept transfer |
|
|
Abstract |
This paper introduces a novel approach to automatic concept transfer; examples of concepts are “romantic”, “earthy”, and “luscious”. The approach modifies the color content of an input image given only a concept specified by a user in natural language, thereby requiring minimal user input. This approach is particularly useful for users who are aware of the message they wish to convey in the transferred image while being unsure of the color combination needed to achieve the corresponding transfer. The user may adjust the intensity level of the concept transfer to his/her liking with a single parameter. The proposed approach uses a convex clustering algorithm, with a novel pruning mechanism, to automatically set the complexity of models of chromatic content. It also uses the Earth-Mover's Distance to compute a mapping between the models of the input image and the target chromatic concept. Results show that our approach yields transferred images which effectively represent concepts, as confirmed by a user study. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
ACM Press |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
978-1-4503-0907-3 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
NPAR |
|
|
Notes |
CIC |
Approved |
no |
|
|
Call Number |
Admin @ si @ MSM2011 |
Serial |
1866 |
|
Permanent link to this record |
|
|
|
|
Author |
Felipe Lumbreras; Xavier Roca; Daniel Ponsa; Robert Benavente; Judit Martinez; Silvia Sanchez; Coen Antens; Juan J. Villanueva |
|
|
Title |
Visual Inspection of Safety Belts |
Type |
Conference Article |
|
Year |
2001 |
Publication |
International Conference on Quality Control by Artificial Vision |
Abbreviated Journal |
|
|
|
Volume |
2 |
Issue |
|
Pages |
526–531 |
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
France |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
QCAV |
|
|
Notes |
ADAS;ISE;CIC |
Approved |
no |
|
|
Call Number |
ADAS @ adas @ LRP2001 |
Serial |
122 |
|
Permanent link to this record |
|
|
|
|
Author |
Christophe Rigaud; Dimosthenis Karatzas; Joost Van de Weijer; Jean-Christophe Burie; Jean-Marc Ogier |
|
|
Title |
Automatic text localisation in scanned comic books |
Type |
Conference Article |
|
Year |
2013 |
Publication |
Proceedings of the International Conference on Computer Vision Theory and Applications |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
814-819 |
|
|
Keywords |
Text localization; comics; text/graphic separation; complex background; unstructured document |
|
|
Abstract |
Comic books constitute an important cultural heritage asset in many countries. Digitization combined with subsequent document understanding enables direct content-based search, as opposed to metadata-only search (e.g. album title or author name). Few studies have been done in this direction. In this work we detail a novel approach for automatic text localization in scanned comic book pages, an essential step towards fully automatic comic book understanding. We focus on speech text, as it is semantically important and represents the majority of the text present in comics. The approach is compared with existing text localization methods found in the literature and results are presented. |
|
|
Address |
Barcelona; February 2013 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
VISAPP |
|
|
Notes |
DAG; CIC; 600.056 |
Approved |
no |
|
|
Call Number |
Admin @ si @ RKW2013b |
Serial |
2261 |
|
Permanent link to this record |
|
|
|
|
Author |
Joost Van de Weijer; Shida Beigpour |
|
|
Title |
The Dichromatic Reflection Model: Future Research Directions and Applications |
Type |
Conference Article |
|
Year |
2011 |
Publication |
International Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
dblp |
|
|
Abstract |
The dichromatic reflection model (DRM) predicts that color distributions form a parallelogram in color space, whose shape is defined by the body reflectance and the illuminant color. In this paper we review the assumptions which led to the DRM and briefly recall two of its main application domains: color image segmentation and photometric invariant feature computation. After having introduced the model, we discuss several limitations of the theory, especially those which arise when working on real-world uncalibrated images. In addition, we summarize recent extensions of the model which allow it to handle more complicated light interactions. Finally, we suggest some future research directions which would further extend its applicability. |
|
|
Address |
Algarve, Portugal |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
SciTePress |
Place of Publication |
|
Editor |
Mestetskiy, Leonid and Braz, José |
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
978-989-8425-47-8 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
VISIGRAPP |
|
|
Notes |
CIC |
Approved |
no |
|
|
Call Number |
Admin @ si @ WeB2011 |
Serial |
1778 |
|
Permanent link to this record |
|
|
|
|
Author |
Bojana Gajic; Eduard Vazquez; Ramon Baldrich |
|
|
Title |
Evaluation of Deep Image Descriptors for Texture Retrieval |
Type |
Conference Article |
|
Year |
2017 |
Publication |
Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2017) |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
251-257 |
|
|
Keywords |
Texture Representation; Texture Retrieval; Convolutional Neural Networks; Psychophysical Evaluation |
|
|
Abstract |
The increasing complexity learnt in the layers of a Convolutional Neural Network has proven to be of great help for the task of classification. The topic has received great attention in recently published literature. Nonetheless, just a handful of works study low-level representations, commonly associated with lower layers. In this paper, we explore recent findings which conclude, counterintuitively, that the last layer of the VGG convolutional network is the best for describing a low-level property such as texture. To shed some light on this issue, we propose a psychophysical experiment to evaluate the adequacy of different layers of the VGG network for texture retrieval. The results obtained suggest that, whereas the last convolutional layer is a good choice for a specific classification task, it might not be the best choice as a texture descriptor, showing very poor performance on texture retrieval. Intermediate layers show the best performance, combining basic filters, as in the primary visual cortex, with a degree of higher-level information to describe more complex textures. |
|
|
Address |
Porto, Portugal; 27 February – 1 March 2017 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
VISIGRAPP |
|
|
Notes |
CIC; 600.087 |
Approved |
no |
|
|
Call Number |
Admin @ si @ |
Serial |
3710 |
|
Permanent link to this record |
|
|
|
|
Author |
Marcos V Conde; Florin Vasluianu; Javier Vazquez; Radu Timofte |
|
|
Title |
Perceptual image enhancement for smartphone real-time applications |
Type |
Conference Article |
|
Year |
2023 |
Publication |
Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
1848-1858 |
|
|
Keywords |
|
|
|
Abstract |
Recent advances in camera design and imaging pipelines allow us to capture high-quality images using smartphones. However, due to the small size and lens limitations of smartphone cameras, we commonly find artifacts or degradation in the processed images. The most common unpleasant effects are noise artifacts, diffraction artifacts, blur, and HDR overexposure. Deep learning methods for image restoration can successfully remove these artifacts. However, most approaches are not suitable for real-time applications on mobile devices due to their heavy computation and memory requirements. In this paper, we propose LPIENet, a lightweight network for perceptual image enhancement, with a focus on deploying it on smartphones. Our experiments show that, with far fewer parameters and operations, our model can deal with the mentioned artifacts and achieve competitive performance compared with state-of-the-art methods on standard benchmarks. Moreover, to prove the efficiency and reliability of our approach, we deployed the model directly on commercial smartphones and evaluated its performance. Our model can process 2K resolution images in under 1 second on mid-level commercial smartphones. |
|
|
Address |
Waikoloa; Hawaii; USA; January 2023 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
WACV |
|
|
Notes |
MACO; CIC |
Approved |
no |
|
|
Call Number |
Admin @ si @ CVV2023 |
Serial |
3900 |
|
Permanent link to this record |