Records | |||||
---|---|---|---|---|---|
Author | Hamed H. Aghdam; Abel Gonzalez-Garcia; Joost Van de Weijer; Antonio Lopez | ||||
Title | Active Learning for Deep Detection Neural Networks | Type | Conference Article | ||
Year | 2019 | Publication | 18th IEEE International Conference on Computer Vision | Abbreviated Journal | |
Volume | Issue | Pages | 3672-3680 | ||
Keywords | |||||
Abstract | The cost of drawing object bounding boxes (i.e., labeling) for millions of images is prohibitively high. For instance, labeling pedestrians in a regular urban image could take 35 seconds on average. Active learning aims to reduce the cost of labeling by selecting only those images that are informative to improve the detection network accuracy. In this paper, we propose a method to perform active learning of object detectors based on convolutional neural networks. We propose a new image-level scoring process to rank unlabeled images for their automatic selection, which clearly outperforms classical scores. The proposed method can be applied to videos and sets of still images. In the former case, temporal selection rules can complement our scoring process. As a relevant use case, we extensively study the performance of our method on the task of pedestrian detection. Overall, the experiments show that the proposed method performs better than random selection. | |||
Address | Seoul; Korea; October 2019 | |||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICCV | ||
Notes | ADAS; LAMP; 600.124; 600.109; 600.141; 600.120; 600.118 | Approved | no | ||
Call Number | Admin @ si @ AGW2019 | Serial | 3321 | |
Permanent link to this record | |||||
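The record above ranks unlabeled images by an image-level informativeness score before sending them for labeling. As a rough illustration of the idea — not the paper's actual scoring process, which aggregates pixel-wise detector probabilities — one can score each image by the mean binary entropy of its detection confidences and label the top-scoring images first:

```python
import math

def image_score(detection_confidences):
    """Mean binary entropy of per-detection confidences: detections near
    0.5 are the most informative to label. (Hypothetical scoring rule,
    for illustration only.)"""
    if not detection_confidences:
        return 0.0
    h = lambda p: -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    return sum(h(p) for p in detection_confidences) / len(detection_confidences)

def select_for_labeling(pool, budget):
    """Pick the `budget` highest-scoring images from the unlabeled pool."""
    ranked = sorted(pool.items(), key=lambda kv: image_score(kv[1]), reverse=True)
    return [name for name, _ in ranked[:budget]]

pool = {
    "img_a": [0.99, 0.97],        # confident detections -> low score
    "img_b": [0.55, 0.48, 0.51],  # uncertain detections -> high score
    "img_c": [0.90, 0.45],
}
print(select_for_labeling(pool, 2))  # -> ['img_b', 'img_c']
```

A random-selection baseline, as compared against in the abstract, would simply sample `budget` images uniformly from the pool.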
Author | Arash Akbarinia; C. Alejandro Parraga | ||||
Title | Biologically Plausible Colour Naming Model | Type | Conference Article | ||
Year | 2015 | Publication | European Conference on Visual Perception ECVP2015 | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Poster | ||||
Address | Liverpool; UK; August 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ECVP | ||
Notes | NEUROBIT; 600.068 | Approved | no | ||
Call Number | Admin @ si @ AkP2015 | Serial | 2660 | |
Permanent link to this record | |||||
Author | Arash Akbarinia; C. Alejandro Parraga | ||||
Title | Biologically plausible boundary detection | Type | Conference Article | ||
Year | 2016 | Publication | 27th British Machine Vision Conference | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Edges are key components of any visual scene to the extent that we can recognise objects merely by their silhouettes. The human visual system captures edge information through neurons in the visual cortex that are sensitive to both intensity discontinuities and particular orientations. The “classical approach” assumes that these cells are only responsive to the stimulus present within their receptive fields; however, recent studies demonstrate that surrounding regions and inter-areal feedback connections influence their responses significantly. In this work we propose a biologically-inspired edge detection model in which orientation-selective neurons are represented through the first derivative of a Gaussian function, resembling double-opponent cells in the primary visual cortex (V1). In our model we account for four kinds of surround, i.e. full, far, iso- and orthogonal-orientation, whose contributions are contrast-dependent. The output signal from V1 is pooled in its perpendicular direction by larger V2 neurons employing a contrast-variant centre-surround kernel. We further introduce a feedback connection from higher-level visual areas to the lower ones. The results of our model on two benchmark datasets show a large improvement over current non-learning, biologically-inspired state-of-the-art algorithms, while being competitive with learning-based methods. | |||
Address | York; UK; September 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | BMVC | ||
Notes | NEUROBIT; 600.068; 600.072 | Approved | no | ||
Call Number | Admin @ si @ AkP2016a | Serial | 2867 | |
Permanent link to this record | |||||
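The model in the abstract above represents orientation-selective V1 neurons with first derivatives of a Gaussian. A minimal NumPy sketch of that front end — the surround modulation, V2 pooling and feedback stages are not reproduced here — builds the oriented kernel and checks that it responds most strongly to an edge of the matching orientation:

```python
import numpy as np

def gaussian_derivative_kernel(sigma, theta, size=9):
    """First derivative of a 2D Gaussian along direction theta (radians),
    a standard model of an orientation-selective V1 receptive field."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate along theta
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2))  # isotropic Gaussian
    return -xr / sigma**2 * g                    # derivative taken along theta

# vertical step edge: dark on the left, bright on the right
patch = np.zeros((9, 9))
patch[:, 5:] = 1.0

resp = {deg: abs(float(np.sum(patch * gaussian_derivative_kernel(1.5, np.radians(deg)))))
        for deg in (0, 45, 90, 135)}
print(max(resp, key=resp.get))  # -> 0 (the horizontal-derivative filter wins)
```

The 0° filter differentiates along x, so it fires hardest on a vertical edge, while the 90° filter's odd symmetry in y cancels out; in a full model these responses would then be modulated by the contrast-dependent surround terms.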
Author | Arash Akbarinia; C. Alejandro Parraga | ||||
Title | Dynamically Adjusted Surround Contrast Enhances Boundary Detection | Type | Conference Article | |
Year | 2016 | Publication | European Conference on Visual Perception | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | |||||
Address | Barcelona; Spain; August 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ECVP | ||
Notes | NEUROBIT | Approved | no | ||
Call Number | Admin @ si @ AkP2016b | Serial | 2900 | |
Permanent link to this record | |||||
Author | Jose Manuel Alvarez; Y. LeCun; Theo Gevers; Antonio Lopez | ||||
Title | Semantic Road Segmentation via Multi-Scale Ensembles of Learned Features | Type | Conference Article | ||
Year | 2012 | Publication | 12th European Conference on Computer Vision – Workshops and Demonstrations | Abbreviated Journal | |
Volume | 7584 | Issue | Pages | 586-595 | |
Keywords | road detection | ||||
Abstract | Semantic segmentation refers to the process of assigning an object label (e.g., building, road, sidewalk, car, pedestrian) to every pixel in an image. Common approaches formulate the task as a random field labeling problem, modeling the interactions between labels by combining local and contextual features such as color, depth, edges, SIFT or HoG. These models are trained to maximize the likelihood of the correct classification given a training set. However, these approaches rely on hand-designed features (e.g., texture, SIFT or HoG) and require higher computational time in the inference process. Therefore, in this paper, we focus on estimating the unary potentials of a conditional random field via ensembles of learned features. We propose an algorithm based on convolutional neural networks to learn local features from training data at different scales and resolutions. Then, diversification between these features is exploited using a weighted linear combination. Experiments on a publicly available database show the effectiveness of the proposed method for semantic road scene segmentation in still images. The algorithm outperforms appearance-based methods and performs comparably to state-of-the-art methods using other sources of information such as depth, motion or stereo. | |||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Springer Berlin Heidelberg | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | 0302-9743 | ISBN | 978-3-642-33867-0 | Medium | |
Area | Expedition | Conference | ECCVW | ||
Notes | ADAS;ISE | Approved | no | ||
Call Number | Admin @ si @ ALG2012; ADAS @ adas | Serial | 2187 | |
Permanent link to this record | |||||
Author | David Aldavert; Marçal Rusiñol | ||||
Title | Manuscript text line detection and segmentation using second-order derivatives analysis | Type | Conference Article | ||
Year | 2018 | Publication | 13th IAPR International Workshop on Document Analysis Systems | Abbreviated Journal | |
Volume | Issue | Pages | 293 - 298 | ||
Keywords | text line detection; text line segmentation; text region detection; second-order derivatives | ||||
Abstract | In this paper, we explore the use of second-order derivatives to detect text lines in handwritten document images. Taking advantage of the fact that the second derivative gives a minimum response when a dark linear element over a bright background has the same orientation as the filter, we use this operator to create a map of the local orientation and strength of putative text lines in the document. Then, we detect line segments by selecting and merging the filter responses that have a similar orientation and scale. Finally, text lines are found by merging the segments that lie within the same text region. The proposed segmentation algorithm is learning-free while showing performance similar to state-of-the-art methods on publicly available datasets. | |||
Address | Vienna; Austria; April 2018 | |||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | DAS | ||
Notes | DAG; 600.084; 600.129; 302.065; 600.121 | Approved | no | ||
Call Number | Admin @ si @ AlR2018a | Serial | 3104 | |
Permanent link to this record | |||||
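The key observation in the record above — a second derivative taken across a dark line on a bright background gives a strong extremal response — can be sketched in one dimension. This toy example (the paper itself uses oriented second-order filters followed by segment selection and merging) locates a synthetic text line with the discrete second derivative along y:

```python
import numpy as np

# synthetic page: bright background with one dark "text line" at row 6
page = np.full((12, 20), 255.0)
page[6, :] = 40.0

# discrete second derivative along y (across putative text lines):
# a dark line on a bright background yields a strong positive response
d2y = page[:-2, :] - 2 * page[1:-1, :] + page[2:, :]
row_strength = d2y.sum(axis=1)                   # accumulate along the line
detected_row = int(np.argmax(row_strength)) + 1  # +1 offsets the valid-mode crop
print(detected_row)  # -> 6
```

In the full method this per-pixel response would be computed at several scales and orientations, then thresholded into segments before merging.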
Author | David Aldavert; Marçal Rusiñol | ||||
Title | Synthetically generated semantic codebook for Bag-of-Visual-Words based word spotting | Type | Conference Article | ||
Year | 2018 | Publication | 13th IAPR International Workshop on Document Analysis Systems | Abbreviated Journal | |
Volume | Issue | Pages | 223 - 228 | ||
Keywords | Word Spotting; Bag of Visual Words; Synthetic Codebook; Semantic Information | ||||
Abstract | Word-spotting methods based on the Bag-of-Visual-Words framework have demonstrated good retrieval performance even when used in a completely unsupervised manner. Although unsupervised approaches are suitable for large document collections due to the cost of acquiring labeled data, these methods also present some drawbacks. For instance, training a suitable “codebook” for a certain dataset has a high computational cost. Therefore, in this paper we present a database-agnostic codebook which is trained from synthetic data. The aim of the proposed approach is to generate a codebook where the only information required is the type of script used in the document. The use of synthetic data also makes it easy to incorporate semantic information in the codebook generation. Thus, the proposed method is able to determine which set of codewords has a semantic representation in the descriptor feature space. Experimental results show that the resulting codebook attains state-of-the-art performance while having a more compact representation. | |||
Address | Vienna; Austria; April 2018 | |||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | DAS | ||
Notes | DAG; 600.084; 600.129; 600.121 | Approved | no | ||
Call Number | Admin @ si @ AlR2018b | Serial | 3105 | |
Permanent link to this record | |||||
Author | Ariel Amato; Felipe Lumbreras; Angel Sappa | ||||
Title | A General-purpose Crowdsourcing Platform for Mobile Devices | Type | Conference Article | ||
Year | 2014 | Publication | 9th International Conference on Computer Vision Theory and Applications | Abbreviated Journal | |
Volume | 3 | Issue | Pages | 211-215 | |
Keywords | Crowdsourcing Platform; Mobile Crowdsourcing | ||||
Abstract | This paper presents details of a general-purpose micro-task on-demand platform based on the crowdsourcing philosophy. This platform was specifically developed for mobile devices in order to exploit the strengths of such devices, namely: i) massivity, ii) ubiquity and iii) embedded sensors. The combined use of mobile platforms and the crowdsourcing model allows tackling everything from the simplest to the most complex tasks. User experience is the highlighted feature of this platform (for both task-proposers and task-solvers). Tools appropriate to a specific task are provided so that a task-solver can perform his/her job in a simpler, faster and more appealing way. Moreover, a task can be easily submitted by simply selecting predefined templates, which cover a wide range of possible applications. Examples of its usage in computer vision and computer games illustrate the potential of the platform. | |||
Address | Lisbon; Portugal; January 2014 | |||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | VISAPP | ||
Notes | ISE; ADAS; 600.054; 600.055; 600.076; 600.078 | Approved | no | ||
Call Number | Admin @ si @ ALS2014 | Serial | 2478 | |
Permanent link to this record | |||||
Author | Eduardo Aguilar; Bhalaji Nagarajan; Rupali Khatun; Marc Bolaños; Petia Radeva | ||||
Title | Uncertainty Modeling and Deep Learning Applied to Food Image Analysis | Type | Conference Article | ||
Year | 2020 | Publication | 13th International Joint Conference on Biomedical Engineering Systems and Technologies | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Recently, computer vision approaches, especially those assisted by deep learning techniques, have shown unexpected advancements that practically solve problems once never imagined to be automated, like face recognition or automated driving. However, food image recognition has received comparatively little attention in the computer vision community. In this project, we review the field of food image analysis and focus on how to combine it with two challenging research lines: deep learning and uncertainty modeling. After discussing our methodology to advance in this direction, we comment on the potential research, social and economic impact of research on food image analysis. | |||
Address | Valletta; Malta; February 2020 | |||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | BIODEVICES | ||
Notes | MILAB | Approved | no | ||
Call Number | Admin @ si @ ANK2020 | Serial | 3526 | |
Permanent link to this record | |||||
Author | Arash Akbarinia; C. Alejandro Parraga; Marta Exposito; Bogdan Raducanu; Xavier Otazu | ||||
Title | Can biological solutions help computers detect symmetry? | Type | Conference Article | ||
Year | 2017 | Publication | 40th European Conference on Visual Perception | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | |||||
Address | Berlin; Germany; August 2017 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ECVP | ||
Notes | NEUROBIT | Approved | no | ||
Call Number | Admin @ si @ APE2017 | Serial | 2995 | |
Permanent link to this record | |||||
Author | David Aldavert; Arnau Ramisa; Ramon Lopez de Mantaras; Ricardo Toledo | ||||
Title | Fast and Robust Object Segmentation with the Integral Linear Classifier | Type | Conference Article | ||
Year | 2010 | Publication | 23rd IEEE Conference on Computer Vision and Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 1046–1053 | ||
Keywords | |||||
Abstract | We propose an efficient method, built on the popular Bag of Features approach, that obtains robust multiclass pixel-level object segmentation of an image in less than 500 ms, with results comparable to or better than most state-of-the-art methods. We introduce the Integral Linear Classifier (ILC), which can readily obtain the classification score for any image sub-window with only 6 additions and 1 product, by fusing the accumulation and classification steps in a single operation. In order to design a method as efficient as possible, our building blocks are carefully selected from the quickest in the state of the art. More precisely, we evaluate the performance of three popular local descriptors, which can be very efficiently computed using integral images, and two fast quantization methods: Hierarchical K-Means and the Extremely Randomized Forest. Finally, we explore the utility of adding spatial bins to the Bag of Features histograms and that of cascade classifiers to improve the obtained segmentation. Our method is compared to the state of the art on the difficult Graz-02 and PASCAL 2007 Segmentation Challenge datasets. | |||
Address | San Francisco; CA; USA; June 2010 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1063-6919 | ISBN | 978-1-4244-6984-0 | Medium | |
Area | Expedition | Conference | CVPR | ||
Notes | ADAS | Approved | no | ||
Call Number | Admin @ si @ ARL2010a | Serial | 1311 | |
Permanent link to this record | |||||
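The Integral Linear Classifier described above rests on the integral-image trick: if each pixel's contribution to a linear classifier score is precomputed, the score of any sub-window reduces to a handful of additions. The sketch below is illustrative only — the paper fuses the accumulation and classification steps to reach 6 additions and 1 product per window — and shows the core lookup:

```python
import numpy as np

def integral_linear_score(pixel_scores):
    """Precompute the integral image of per-pixel linear-classifier
    contributions; any sub-window's score then costs 4 lookups plus a bias."""
    ii = np.zeros((pixel_scores.shape[0] + 1, pixel_scores.shape[1] + 1))
    ii[1:, 1:] = pixel_scores.cumsum(axis=0).cumsum(axis=1)

    def window_score(y0, x0, y1, x1, bias=0.0):
        # classification score of the window pixel_scores[y0:y1, x0:x1]
        return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0] + bias

    return window_score

# toy per-pixel contributions (in practice: w . phi(pixel) for a learned w)
scores = np.arange(16, dtype=float).reshape(4, 4)
f = integral_linear_score(scores)
print(f(1, 1, 3, 3))  # -> 30.0  (sum of scores[1:3, 1:3] = 5+6+9+10)
```

Because the precomputation is shared, sliding this scorer over every sub-window of an image costs constant time per window, which is what makes sub-500 ms segmentation plausible.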
Author | David Aldavert; Arnau Ramisa; Ramon Lopez de Mantaras; Ricardo Toledo | ||||
Title | Real-time Object Segmentation using a Bag of Features Approach | Type | Conference Article | ||
Year | 2010 | Publication | 13th International Conference of the Catalan Association for Artificial Intelligence | Abbreviated Journal | |
Volume | 220 | Issue | Pages | 321–329 | |
Keywords | Object Segmentation; Bag Of Features; Feature Quantization; Densely sampled descriptors | ||||
Abstract | In this paper, we propose an object segmentation framework, based on the popular bag of features (BoF), which can process several images per second while achieving good segmentation accuracy, assigning an object category to every pixel of the image. We propose an efficient color descriptor to complement the information obtained by a typical gradient-based local descriptor. Results show that color proves to be a useful cue to increase the segmentation accuracy, especially in large homogeneous regions. Then, we extend the Hierarchical K-Means codebook using the recently proposed Vector of Locally Aggregated Descriptors method. Finally, we show that the BoF method can be easily parallelized since it is applied locally, thus further reducing the time necessary to process an image. The performance of the proposed method is evaluated on the standard PASCAL 2007 Segmentation Challenge object segmentation dataset. | |||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | IOS Press, Amsterdam | Place of Publication | Editor | R. Alquezar; A. Moreno; J. Aguilar |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 9781607506423 | Medium | ||
Area | Expedition | Conference | CCIA | ||
Notes | ADAS | Approved | no | ||
Call Number | Admin @ si @ ARL2010b | Serial | 1417 | |
Permanent link to this record | |||||
Author | Joan Arnedo-Moreno; Agata Lapedriza | ||||
Title | Visualizing key authenticity: turning your face into your public key | Type | Conference Article | ||
Year | 2010 | Publication | 6th China International Conference on Information Security and Cryptology | Abbreviated Journal | |
Volume | Issue | Pages | 605-618 | ||
Keywords | |||||
Abstract | Biometric information has become a technology complementary to cryptography, allowing cryptographic data to be conveniently managed. Two important needs are fulfilled: first of all, making such data always readily available, and additionally, making its legitimate owner easily identifiable. In this work we propose a signature system which integrates face-recognition biometrics with an identity-based signature scheme, so the user's face effectively becomes his public key and system ID. Thus, other users may verify messages using photos of the claimed sender, providing a reasonable trade-off between system security and usability, as well as a much more straightforward public-key authenticity and distribution process. | |||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | Inscrypt | ||
Notes | OR;MV | Approved | no | ||
Call Number | Admin @ si @ ArL2010c | Serial | 2149 | |
Permanent link to this record | |||||
Author | Eduardo Aguilar; Bogdan Raducanu; Petia Radeva; Joost Van de Weijer | ||||
Title | Continual Evidential Deep Learning for Out-of-Distribution Detection | Type | Conference Article | ||
Year | 2023 | Publication | IEEE/CVF International Conference on Computer Vision (ICCV) Workshops - Visual Continual Learning Workshop | Abbreviated Journal | |
Volume | Issue | Pages | 3444-3454 | ||
Keywords | |||||
Abstract | Uncertainty-based deep learning models have attracted a great deal of interest for their ability to provide accurate and reliable predictions. Evidential deep learning stands out, achieving remarkable performance in detecting out-of-distribution (OOD) data with a single deterministic neural network. Motivated by this fact, in this paper we propose the integration of an evidential deep learning method into a continual learning framework in order to perform simultaneous incremental object classification and OOD detection. Moreover, we analyze the ability of vacuity and dissonance to differentiate between in-distribution data belonging to old classes and OOD data. The proposed method, called CEDL, is evaluated on CIFAR-100 considering two settings consisting of 5 and 10 tasks, respectively. The obtained results show that the proposed method, in addition to providing comparable object classification results with respect to the baseline, largely outperforms several post-hoc OOD detection methods on three evaluation metrics: AUROC, AUPR and FPR95. | |||
Address | Paris; France; October 2023 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICCVW | ||
Notes | LAMP; MILAB | Approved | no | ||
Call Number | Admin @ si @ ARR2023 | Serial | 3841 | |
Permanent link to this record | |||||
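Evidential deep learning, as used in the record above, places a Dirichlet distribution over class probabilities; its vacuity uncertainty is what flags OOD inputs. A minimal sketch of that standard formulation follows (not the CEDL training loop, and dissonance is omitted):

```python
def dirichlet_uncertainty(evidence):
    """Belief masses and vacuity for a Dirichlet(alpha = evidence + 1)
    opinion, as in evidential deep learning: vacuity -> 1 as the total
    collected evidence -> 0, which is the OOD signal."""
    k = len(evidence)
    alpha = [e + 1.0 for e in evidence]
    s = sum(alpha)                        # Dirichlet strength
    belief = [(a - 1.0) / s for a in alpha]
    vacuity = k / s                       # beliefs and vacuity sum to 1
    return belief, vacuity

# strong evidence for class 0 (in-distribution) vs. no evidence (OOD-like)
_, v_in = dirichlet_uncertainty([18.0, 1.0, 1.0])   # S = 23 -> vacuity ~ 0.13
_, v_ood = dirichlet_uncertainty([0.0, 0.0, 0.0])   # S = 3  -> vacuity = 1.0
print(round(v_in, 2), v_ood)  # -> 0.13 1.0
```

Thresholding vacuity (high for OOD, low for in-distribution) is the simplest detector one can build on top of such a network's evidence outputs.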