Records | |||||
---|---|---|---|---|---|
Author | Debora Gil; Jaume Garcia; Aura Hernandez-Sabate; Enric Marti | ||||
Title | Manifold parametrization of the left ventricle for a statistical modelling of its complete anatomy | Type | Conference Article | ||
Year | 2010 | Publication | 8th Medical Imaging | Abbreviated Journal | |
Volume | 7623 | Issue | 762304 | Pages | 304 |
Keywords | |||||
Abstract |
Distortion of Left Ventricle (LV) external anatomy is related to some dysfunctions, such as hypertrophy. The architecture of myocardial fibers determines LV electromechanical activation patterns as well as mechanics. Thus, their joint modelling would allow the design of specific interventions (such as pacemaker implantation and LV remodelling) and therapies (such as resynchronization). On one hand, accurate modelling of external anatomy requires either a dense sampling or a continuous infinite-dimensional approach, which requires non-Euclidean statistics. On the other hand, computation of fiber models requires statistics on Riemannian spaces. Most approaches compute separate statistical models for external anatomy and fiber architecture. In this work we propose a general mathematical framework based on differential geometry concepts for computing a statistical model including both external and fiber anatomy. Our framework provides a continuous approach to external anatomy supporting standard statistics. We also provide a straightforward formula for the computation of the Riemannian fiber statistics. We have applied our methodology to the computation of a complete anatomical atlas of canine hearts from diffusion tensor studies. The orientation of fibers over the average external geometry agrees with the segmental description of orientations reported in the literature. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | SPIE | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | SPIE | ||
Notes | IAM | Approved | no | ||
Call Number | IAM @ iam @ GGH2010a | Serial | 1522 | ||
Permanent link to this record | |||||
Author | Marta Ligero; Guillermo Torres; Carles Sanchez; Katerine Diaz; Raquel Perez; Debora Gil | ||||
Title | Selection of Radiomics Features based on their Reproducibility | Type | Conference Article | ||
Year | 2019 | Publication | 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society | Abbreviated Journal | |
Volume | Issue | Pages | 403-408 | ||
Keywords | |||||
Abstract |
Dimensionality reduction is key to alleviating machine learning artifacts in clinical applications with Small Sample Size (SSS) unbalanced datasets. Existing methods rely on either the probabilistic distribution of training data or the discriminant power of the reduced space, disregarding the impact of repeatability and uncertainty in features. The present study proposes using the reproducibility of radiomics features to select features with a high intraclass correlation coefficient (ICC). The reproducibility covers the variability introduced in image acquisition, such as medical scan acquisition parameters and convolution kernels, which affects intensity-based features, as well as tumor annotations made by physicians, which influence morphological descriptors of the lesion. To assess the reproducibility of radiomics features, three studies were conducted on cases collected at Vall Hebron Oncology Institute (VHIO) from responders to oncology treatment. The studies focused on the variability due to the convolution kernel, the image acquisition parameters, and inter-observer lesion identification. The selected features were those with an ICC higher than 0.7 in all three studies. The features selected for their reproducibility were then evaluated for lesion malignancy classification on a different database. Results show better performance compared to several state-of-the-art methods, including Principal Component Analysis (PCA), Kernel Discriminant Analysis via QR decomposition (KDAQR), LASSO, and a custom-built Convolutional Neural Network. | ||||
Address | Berlin; Germany; July 2019 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | EMBC | ||
Notes | IAM; 600.139; 600.145 | Approved | no | ||
Call Number | Admin @ si @ LTS2019 | Serial | 3358 | ||
Permanent link to this record | |||||
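The reproducibility-based screening described in the record above can be sketched as follows. This is a minimal illustration assuming a one-way random-effects ICC(1,1) and hypothetical study arrays; it is not the authors' exact pipeline, and only the 0.7 threshold is taken from the abstract:

```python
import numpy as np

def icc_1_1(ratings):
    """One-way random-effects intraclass correlation, ICC(1,1).

    ratings: (n_subjects, k_raters) array of repeated measurements of
    one feature, e.g. the same radiomics feature computed under
    different convolution kernels or annotations.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand_mean = ratings.mean()
    subject_means = ratings.mean(axis=1)
    # One-way ANOVA mean squares
    ms_between = k * np.sum((subject_means - grand_mean) ** 2) / (n - 1)
    ms_within = np.sum((ratings - subject_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

def select_reproducible(feature_studies, threshold=0.7):
    """Indices of features whose ICC exceeds `threshold` in ALL studies.

    feature_studies: list of (n_subjects, k_raters, n_features) arrays,
    one per reproducibility study.
    """
    n_features = feature_studies[0].shape[2]
    return [f for f in range(n_features)
            if all(icc_1_1(s[:, :, f]) > threshold for s in feature_studies)]
```

Requiring the threshold in every study (intersection rather than union) is what makes the selection conservative: a feature survives only if it is stable under kernel, acquisition, and annotator variability alike.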
Author | Sounak Dey; Anguelos Nicolaou; Josep Llados; Umapada Pal | ||||
Title | Local Binary Pattern for Word Spotting in Handwritten Historical Document | Type | Conference Article | ||
Year | 2016 | Publication | Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (SSPR) | Abbreviated Journal | |
Volume | Issue | Pages | 574-583 | ||
Keywords | Local binary patterns; Spatial sampling; Learning-free; Word spotting; Handwritten; Historical document analysis; Large-scale data | ||||
Abstract |
Digital libraries store images that can be highly degraded, and to index this kind of image we resort to word spotting as our information retrieval system. Information retrieval for handwritten document images is more challenging due to the difficulties of complex layout analysis, large variations in writing styles, and the degradation or low quality of historical manuscripts. This paper presents a simple, innovative, learning-free method for word spotting in large-scale historical documents combining Local Binary Patterns (LBP) and spatial sampling. This method offers three advantages: firstly, it operates in a completely learning-free paradigm, which is very different from unsupervised learning methods; secondly, the computational time is significantly low because of the LBP features, which are very fast to compute; and thirdly, the method can be used in scenarios where annotations are not available. Finally, we compare the results of our proposed retrieval method with other methods in the literature and obtain the best results in the learning-free paradigm. | ||||
Address | Merida; Mexico; December 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | S+SSPR | ||
Notes | DAG; 600.097; 602.006; 603.053 | Approved | no | ||
Call Number | Admin @ si @ DNL2016 | Serial | 2876 | ||
Permanent link to this record | |||||
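A learning-free LBP descriptor of the kind the record above builds on can be sketched with plain NumPy. This is a generic 8-neighbour LBP histogram, not the authors' exact feature (their method additionally combines LBP with spatial sampling over image zones):

```python
import numpy as np

def lbp_histogram(img):
    """Normalized 256-bin histogram of basic 8-neighbour LBP codes.

    img: 2-D grayscale array. Each interior pixel is encoded by
    thresholding its 8 neighbours against the centre value and packing
    the comparison bits into a byte.
    """
    img = np.asarray(img, dtype=float)
    centre = img[1:-1, 1:-1]
    # Offsets of the 8 neighbours, clockwise from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(centre, dtype=np.uint8)
    h, w = img.shape
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= centre).astype(np.uint8) << np.uint8(bit)
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

Because the descriptor needs no training, word images can be compared directly by histogram distance, which is what makes such a pipeline usable when no annotations exist.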
Author | Idoia Ruiz; Joan Serrat | ||||
Title | Rank-based ordinal classification | Type | Conference Article | ||
Year | 2020 | Publication | 25th International Conference on Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 8069-8076 | ||
Keywords | |||||
Abstract |
Differently from the regular classification task, in ordinal classification there is an order in the classes. As a consequence, not all classification errors matter the same: a predicted class close to the groundtruth one is better than predicting a farther-away class. To account for this, most previous works employ loss functions based on the absolute difference between the predicted and groundtruth class labels. We argue that there are many cases in ordinal classification where label values are arbitrary (for instance 1…C, C being the number of classes) and thus such loss functions may not be the best choice. We instead propose a network architecture that produces not a single class prediction but an ordered vector, or ranking, of all the possible classes from most to least likely. This is thanks to a loss function that compares groundtruth and predicted rankings of these class labels, not the labels themselves. Another advantage of this new formulation is that we can enforce consistency in the predictions, namely, predicted rankings come from some unimodal vector of scores with mode at the groundtruth class. We compare with state-of-the-art ordinal classification methods, showing that ours attains equal or better performance, as measured by common ordinal classification metrics, on three benchmark datasets. Furthermore, it is also suitable for a new task on image aesthetics assessment, i.e. most-voted score prediction. Finally, we also apply it to building damage assessment from satellite images, providing an analysis of its performance depending on the degree of imbalance of the dataset. | ||||
Address | Virtual; January 2021 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICPR | ||
Notes | ADAS; 600.118; 600.124 | Approved | no | ||
Call Number | Admin @ si @ RuS2020 | Serial | 3549 | ||
Permanent link to this record | |||||
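The ranking-based target described in the record above can be illustrated as follows. This is a sketch under the assumption that the groundtruth ranking orders classes by distance to the true label (ties broken towards the lower label), with a Spearman-footrule comparison standing in for the paper's learned loss:

```python
import numpy as np

def groundtruth_ranking(y, n_classes):
    """Classes ordered from most to least likely given true class y:
    y first, then the rest by absolute distance to y (ties broken
    towards the lower label)."""
    classes = np.arange(n_classes)
    # lexsort: last key is primary, so sort by distance, then by label
    return classes[np.lexsort((classes, np.abs(classes - y)))]

def predicted_ranking(scores):
    """Ranking (highest score first) induced by a class-score vector."""
    return np.argsort(-np.asarray(scores, dtype=float), kind="stable")

def footrule_distance(r1, r2):
    """Spearman footrule between two rankings of the same classes:
    sum of absolute position differences; 0 iff identical rankings."""
    pos1 = np.empty(len(r1), dtype=int); pos1[r1] = np.arange(len(r1))
    pos2 = np.empty(len(r2), dtype=int); pos2[r2] = np.arange(len(r2))
    return int(np.abs(pos1 - pos2).sum())
```

Note how the comparison never touches the label values themselves, only their positions in the two rankings, which is the property the abstract argues for when labels are arbitrary.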
Author | Javier Vazquez; Robert Benavente; Maria Vanrell | ||||
Title | Naming constraints constancy | Type | Conference Article | ||
Year | 2012 | Publication | 2nd Joint AVA / BMVA Meeting on Biological and Machine Vision | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract |
Different studies have shown that languages from industrialized cultures share a set of 11 basic colour terms: red, green, blue, yellow, pink, purple, brown, orange, black, white, and grey (Berlin & Kay, 1969, Basic Color Terms, University of California Press) (Kay & Regier, 2003, PNAS, 100, 9085-9089). Some of these studies have also reported the best representatives or focal values of each colour (Boynton and Olson, 1990, Vision Res., 30, 1311–1317), (Sturges and Whitfield, 1995, CRA, 20:6, 364–376). Further studies have provided fuzzy datasets for colour naming by asking human observers to rate colours in terms of membership values (Benavente et al., 2006, CRA, 31:1, 48–56). Recently, a computational model based on these human ratings has been developed (Benavente et al., 2008, JOSA-A, 25:10, 2582-2593). This computational model follows a fuzzy approach to assign a colour name to a particular RGB value. For example, a pixel with a value (255,0,0) will be named 'red' with membership 1, while a cyan pixel with an RGB value of (0, 200, 200) will be considered to be 0.5 green and 0.5 blue. In this work, we show how this colour naming paradigm can be applied to different computer vision tasks. In particular, we report results in colour constancy (Vazquez-Corral et al., 2012, IEEE TIP, in press), showing that the classical constraints on either illumination or surface reflectance can be substituted by the statistical properties encoded in the colour names. [Supported by projects TIN2010-21771-C02-1, CSD2007-00018]. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | AVA | ||
Notes | CIC | Approved | no | ||
Call Number | Admin @ si @ VBV2012 | Serial | 2131 | ||
Permanent link to this record | |||||
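As a rough illustration of the fuzzy colour-naming idea in the record above (membership values per colour term for an RGB triplet), the toy sketch below uses inverse-distance weights to a handful of hypothetical focal colours; the cited parametric model of Benavente et al. is considerably more elaborate, and these focal values are illustrative, not the calibrated foci from the studies:

```python
import numpy as np

# Illustrative focal RGB values (hypothetical, for the sketch only)
FOCALS = {
    "red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255),
    "yellow": (255, 255, 0), "black": (0, 0, 0), "white": (255, 255, 255),
}

def fuzzy_color_memberships(rgb, sharpness=2.0):
    """Toy fuzzy colour naming: membership of an RGB triplet in each
    colour category, from normalized inverse distances to focal colours.
    A pixel exactly at a focal colour gets membership 1 in that term."""
    rgb = np.asarray(rgb, dtype=float)
    names = list(FOCALS)
    d = np.array([np.linalg.norm(rgb - np.array(FOCALS[n])) for n in names])
    if np.any(d == 0):
        w = (d == 0).astype(float)     # exact hit on a focal colour
    else:
        w = d ** -sharpness            # closer focal -> larger weight
    w = w / w.sum()
    return dict(zip(names, w))
```

Consistent with the abstract's cyan example, a pixel equidistant from the green and blue foci receives equal membership in both terms.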
Author | Xavier Perez Sala; Laura Igual; Sergio Escalera; Cecilio Angulo | ||||
Title | Uniform Sampling of Rotations for Discrete and Continuous Learning of 2D Shape Models | Type | Book Chapter | ||
Year | 2012 | Publication | Vision Robotics: Technologies for Machine Learning and Vision Applications | Abbreviated Journal | |
Volume | Issue | 2 | Pages | 23-42 | |
Keywords | |||||
Abstract |
Different methodologies for uniform sampling over the rotation group, SO(3), for building unbiased 2D shape models from 3D objects are introduced and reviewed in this chapter. State-of-the-art non-uniform sampling approaches are discussed, and uniform sampling methods using Euler angles and quaternions are introduced. Moreover, since the presented work is oriented towards model-building applications, it is not limited to general discrete methods for obtaining uniform 3D rotations, but also takes a continuous point of view in the case of Procrustes Analysis. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | IGI-Global | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB;HuPBA | Approved | no | ||
Call Number | Admin @ si @ PIE2012 | Serial | 2064 | ||
Permanent link to this record | |||||
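Uniform sampling of SO(3) via quaternions, as reviewed in the chapter above, is commonly done with Shoemake's subgroup-algorithm method; a minimal sketch (the chapter itself covers several methods, this is just the standard quaternion one):

```python
import numpy as np

def random_rotation_quaternion(rng):
    """Draw a unit quaternion uniformly distributed over SO(3)
    (Shoemake's method): three uniform variates map to a point on
    the unit 3-sphere with the correct Haar-uniform density."""
    u1, u2, u3 = rng.random(3)
    a, b = np.sqrt(1.0 - u1), np.sqrt(u1)
    return np.array([a * np.sin(2 * np.pi * u2),
                     a * np.cos(2 * np.pi * u2),
                     b * np.sin(2 * np.pi * u3),
                     b * np.cos(2 * np.pi * u3)])
```

Naive alternatives such as uniformly sampled Euler angles concentrate samples near the poles, which is exactly the bias in 2D shape models that uniform SO(3) sampling avoids.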
Author | Estefania Talavera; Carolin Wuerich; Nicolai Petkov; Petia Radeva | ||||
Title | Topic modelling for routine discovery from egocentric photo-streams | Type | Journal Article | ||
Year | 2020 | Publication | Pattern Recognition | Abbreviated Journal | PR |
Volume | 104 | Issue | Pages | 107330 | |
Keywords | Routine; Egocentric vision; Lifestyle; Behaviour analysis; Topic modelling | ||||
Abstract |
Developing tools to understand and visualize lifestyle is of high interest when addressing the improvement of the habits and well-being of people. Routine, defined as the usual things that a person does daily, helps describe an individual's lifestyle. With this paper, we are the first to address the development of novel tools for the automatic discovery of routine days of an individual from his/her egocentric images. In the proposed model, sequences of images are first characterized by semantic labels detected by pre-trained CNNs. Then, these features are organized in temporal-semantic documents to later be embedded into a topic model space. Finally, Dynamic Time Warping and Spectral Clustering methods are used for the final routine/non-routine day discrimination. Moreover, we introduce the new EgoRoutine dataset, a collection of 104 egocentric days with more than 100,000 images recorded by 7 users. Results show that routine can be discovered and behavioural patterns can be observed. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB; no proj | Approved | no | ||
Call Number | Admin @ si @ TWP2020 | Serial | 3435 | ||
Permanent link to this record | |||||
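The Dynamic Time Warping step used for day-to-day comparison in the record above can be sketched as follows; this is textbook DTW over sequences of per-time-slot feature vectors (e.g. topic distributions), not the paper's exact configuration:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two sequences of feature
    vectors (rows), e.g. per-time-slot descriptors of two days.
    Classic O(n*m) dynamic program with Euclidean local cost."""
    a = np.asarray(a, dtype=float).reshape(len(a), -1)
    b = np.asarray(b, dtype=float).reshape(len(b), -1)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])
```

DTW tolerates days whose activities happen at slightly shifted times, which is why a pairwise DTW matrix is a natural input to spectral clustering of routine vs. non-routine days.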
Author | Gemma Rotger | ||||
Title | Lifelike Humans: Detailed Reconstruction of Expressive Human Faces | Type | Book Whole | ||
Year | 2021 | Publication | PhD Thesis, Universitat Autonoma de Barcelona-CVC | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract |
Developing human-like digital characters is a challenging task, since humans are used to recognizing their fellows and find computer-generated characters inadequately humanized. To meet the standards of videogame and digital film productions it is necessary to model and animate these characters as closely to human beings as possible. However, this is an arduous and expensive task, since many artists and specialists are required to work on a single character. Therefore, to fulfill these requirements we found it an interesting option to study the automatic creation of detailed characters through inexpensive setups. In this work, we develop novel techniques to produce detailed characters by combining different aspects that stand out when developing realistic characters: skin detail, facial hair, expressions, and microexpressions. We examine each of the mentioned areas with the aim of automatically recovering each of the parts without user interaction or training data. We study the problems for their robustness but also for the simplicity of the setup, preferring single-image input with uncontrolled illumination and methods that can easily be computed with the commodity of a standard laptop. A detailed face with wrinkles and skin details is vital to develop a realistic character. In this work, we introduce our method to automatically describe facial wrinkles in the image and transfer them to the recovered base face. Then we advance to facial hair recovery by resolving a fitting problem with a novel parametrization model. Lastly, we develop a mapping function that allows transferring expressions and microexpressions between different meshes, which provides realistic animations for our detailed mesh. We cover all the mentioned points with a focus on key aspects such as (i) how to describe skin wrinkles in a simple and straightforward manner, (ii) how to recover 3D from 2D detections, (iii) how to recover and model facial hair from 2D to 3D, (iv) how to transfer expressions between models holding both skin detail and facial hair, and (v) how to perform all the described actions without training data or user interaction. In this work, we present our proposals to solve these aspects with an efficient and simple setup. We validate our work with several datasets, both synthetic and real data, proving remarkable results even in challenging cases such as occlusions from glasses and thick beards, and even working with different face topologies like single-eyed cyclopes. | ||||
Address | |||||
Corporate Author | Thesis | Ph.D. thesis | |||
Publisher | Ediciones Graficas Rey | Place of Publication | Editor | Felipe Lumbreras;Antonio Agudo | |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-84-122714-3-0 | Medium | ||
Area | Expedition | Conference | |||
Notes | ADAS | Approved | no | ||
Call Number | Admin @ si @ Rot2021 | Serial | 3513 | ||
Permanent link to this record | |||||
Author | Pau Torras; Mohamed Ali Souibgui; Sanket Biswas; Alicia Fornes | ||||
Title | Segmentation-Free Alignment of Arbitrary Symbol Transcripts to Images | Type | Conference Article | ||
Year | 2023 | Publication | Document Analysis and Recognition – ICDAR 2023 Workshops | Abbreviated Journal | |
Volume | 14193 | Issue | Pages | 83-93 | |
Keywords | Historical Manuscripts; Symbol Alignment | ||||
Abstract |
Developing arbitrary symbol recognition systems is a challenging endeavour. Even using content-agnostic architectures such as few-shot models, performance can be substantially improved by providing a number of well-annotated examples for training. In some contexts, transcripts of the symbols are available without any position information associated with them, which enables using line-level recognition architectures. A way of providing this position information to detection-based architectures is finding systems that can align the input symbols with the transcription. In this paper we discuss some symbol alignment techniques that are suitable for low-data scenarios and provide insight into their perceived strengths and weaknesses. In particular, we study the usage of Connectionist Temporal Classification models and attention-based sequence-to-sequence models, and we compare them with the results obtained by a few-shot recognition system. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICDAR | ||
Notes | DAG | Approved | no | ||
Call Number | Admin @ si @ TSS2023 | Serial | 3850 | ||
Permanent link to this record | |||||
Author | Sounak Dey; Palaiahnakote Shivakumara; K.S. Raghunanda; Umapada Pal; Tong Lu; G. Hemantha Kumar; Chee Seng Chan | ||||
Title | Script independent approach for multi-oriented text detection in scene image | Type | Journal Article | ||
Year | 2017 | Publication | Neurocomputing | Abbreviated Journal | NEUCOM |
Volume | 242 | Issue | Pages | 96-112 | |
Keywords | |||||
Abstract |
Developing a text detection method which is invariant to scripts in natural scene images is a challenging task due to the different geometrical structures of various scripts. Besides, multi-oriented text lines in natural scene images make the problem more challenging. This paper proposes to explore the ring radius transform (RRT) for text detection in multi-oriented and multi-script environments. The method finds component regions based on the convex hull to generate radius matrices using RRT. It is a fact that RRT provides low radius values for the pixels that are near to edges, constant radius values for the pixels that represent stroke width, and high radius values that represent holes created in the background and convex hull because of the regular structures of text components. We apply k-means clustering on the radius matrices to group such spatially coherent regions into individual clusters. Then the proposed method studies the radius values of those cluster components that are close to the centroid and far from the centroid to detect text components. Furthermore, we have developed a Bangla dataset (named the ISI-UM dataset) and propose a semi-automatic system for generating its ground truth for text detection of arbitrary orientations, which can be used by researchers for text detection and recognition in the future. The ground truth will be released to the public. Experimental results on our ISI-UM data and other standard datasets, namely ICDAR 2013 scene, SVT and MSRA data, show that the proposed method outperforms the existing methods in terms of multi-lingual and multi-oriented text detection ability. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | DAG; 600.121 | Approved | no | ||
Call Number | Admin @ si @ DSR2017 | Serial | 3260 | ||
Permanent link to this record | |||||
Author | Albert Gordo; Alicia Fornes; Ernest Valveny; Josep Llados | ||||
Title | A Bag of Notes Approach to Writer Identification in Old Handwritten Music Scores | Type | Conference Article | ||
Year | 2010 | Publication | 9th IAPR International Workshop on Document Analysis Systems | Abbreviated Journal | |
Volume | Issue | Pages | 247–254 | ||
Keywords | |||||
Abstract |
Determining the authorship of a document, namely writer identification, can be an important source of information for document categorization. Contrary to text documents, the identification of the writer of graphical documents is still a challenge. In this paper we present a robust approach for writer identification in a particular kind of graphical documents, old music scores. This approach adapts the bag of visual terms method for coping with graphic documents. The identification is performed only using the graphical music notation. For this purpose, we generate a graphic vocabulary without recognizing any music symbols, and consequently, avoiding the difficulties in the recognition of hand-drawn symbols in old and degraded documents. The proposed method has been tested on a database of old music scores from the 17th to 19th centuries, achieving very high identification rates. | ||||
Address | Boston; USA; | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-1-60558-773-8 | Medium | ||
Area | Expedition | Conference | DAS | ||
Notes | DAG | Approved | no | ||
Call Number | DAG @ dag @ GFV2010 | Serial | 1320 | ||
Permanent link to this record | |||||
Author | Antonio Lopez; Joan Serrat; Cristina Cañero; Felipe Lumbreras; T. Graf | ||||
Title | Robust lane markings detection and road geometry computation | Type | Journal Article | ||
Year | 2010 | Publication | International Journal of Automotive Technology | Abbreviated Journal | IJAT |
Volume | 11 | Issue | 3 | Pages | 395–407 |
Keywords | lane markings | ||||
Abstract |
Detection of lane markings based on a camera sensor can be a low-cost solution to lane departure and curve-over-speed warnings. A number of methods and implementations have been reported in the literature. However, reliable detection is still an issue because of, for example, cast shadows, worn and occluded markings, and variable ambient lighting conditions. We focus on increasing detection reliability in two ways. First, we employed an image feature other than the commonly used edges: ridges, which we claim address this problem better. Second, we adapted RANSAC, a generic robust estimation method, to fit a parametric model of a pair of lane lines to the image features, based on both ridgeness and ridge orientation. In addition, the model was fitted for the left and right lane lines simultaneously to enforce a consistent result. Four measures of interest for driver assistance applications were computed directly from the fitted parametric model at each frame: lane width, lane curvature, vehicle yaw angle, and lateral offset with regard to the lane medial axis. We qualitatively assessed our method on video sequences captured on several road types and under very different lighting conditions. We also quantitatively assessed it on synthetic but realistic video sequences for which the road geometry and vehicle trajectory ground truth are known. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | The Korean Society of Automotive Engineers | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1229-9138 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ADAS | Approved | no | ||
Call Number | ADAS @ adas @ LSC2010 | Serial | 1300 | ||
Permanent link to this record | |||||
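The RANSAC fitting stage the record above adapts can be illustrated in its generic form. This sketch fits a single straight line y = m*x + c to noisy 2-D feature points, whereas the paper fits a parametric model of a lane-line pair using ridgeness and ridge orientation:

```python
import numpy as np

def ransac_line(points, n_iters=200, inlier_tol=1.0, seed=0):
    """Robustly fit y = m*x + c to 2-D points with RANSAC: repeatedly
    fit a candidate line to 2 random points, keep the candidate with
    the largest consensus set, then least-squares refit on it."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    best_inliers = None
    for _ in range(n_iters):
        i, j = rng.choice(len(pts), size=2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if x1 == x2:        # vertical candidate; skipped in this sketch
            continue
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        inliers = np.abs(pts[:, 1] - (m * pts[:, 0] + c)) < inlier_tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Final least-squares refit on the consensus set only
    m, c = np.polyfit(pts[best_inliers, 0], pts[best_inliers, 1], 1)
    return m, c
```

The consensus-set mechanism is what makes RANSAC robust to the cast shadows and occlusions the abstract mentions: gross outliers never enter the final least-squares fit.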
Author | Jose Manuel Alvarez; Theo Gevers; Antonio Lopez | ||||
Title | Evaluating Color Representation for Online Road Detection | Type | Conference Article | ||
Year | 2013 | Publication | ICCV Workshop on Computer Vision in Vehicle Technology: From Earth to Mars | Abbreviated Journal | |
Volume | Issue | Pages | 594-595 | ||
Keywords | |||||
Abstract |
Detecting traversable road areas ahead of a moving vehicle is a key process for modern autonomous driving systems. Most existing algorithms use color to classify pixels as road or background. These algorithms reduce the effect of lighting variations and weather conditions by exploiting the discriminant/invariant properties of different color representations. However, to date, no comparison between these representations has been conducted. Therefore, in this paper, we perform an evaluation of existing color representations for road detection. More specifically, we focus on color planes derived from RGB data and their most common combinations. The evaluation is done on a set of 7000 road images acquired using an on-board camera in different real-driving situations. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVVT:E2M | ||
Notes | ADAS;ISE | Approved | no | ||
Call Number | Admin @ si @ AGL2013 | Serial | 2794 | ||
Permanent link to this record | |||||
Author | Jose Manuel Alvarez; Antonio Lopez; Theo Gevers; Felipe Lumbreras | ||||
Title | Combining Priors, Appearance and Context for Road Detection | Type | Journal Article | ||
Year | 2014 | Publication | IEEE Transactions on Intelligent Transportation Systems | Abbreviated Journal | TITS |
Volume | 15 | Issue | 3 | Pages | 1168-1178 |
Keywords | Illuminant invariance; lane markings; road detection; road prior; road scene understanding; vanishing point; 3-D scene layout | ||||
Abstract |
Detecting the free road surface ahead of a moving vehicle is an important research topic in different areas of computer vision, such as autonomous driving or car collision warning. Current vision-based road detection methods are usually based solely on low-level features. Furthermore, they generally assume structured roads, road homogeneity, and uniform lighting conditions, constraining their applicability in real-world scenarios. In this paper, road priors and contextual information are introduced for road detection. First, we propose an algorithm to estimate road priors online using geographical information, providing relevant initial information about the road location. Then, contextual cues, including horizon lines, vanishing points, lane markings, 3-D scene layout, and road geometry, are used in addition to low-level cues derived from the appearance of roads. Finally, a generative model is used to combine these cues and priors, leading to a road detection method that is, to a large degree, robust to varying imaging conditions, road types, and scenarios. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1524-9050 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ADAS; 600.076;ISE | Approved | no | ||
Call Number | Admin @ si @ ALG2014 | Serial | 2501 | ||
Permanent link to this record | |||||
Author | Maedeh Aghaei; Mariella Dimiccoli; Petia Radeva | ||||
Title | Towards social interaction detection in egocentric photo-streams | Type | Conference Article | ||
Year | 2015 | Publication | Proceedings of SPIE, 8th International Conference on Machine Vision , ICMV 2015 | Abbreviated Journal | |
Volume | 9875 | Issue | Pages | ||
Keywords | |||||
Abstract |
Detecting social interaction in videos relying solely on visual cues is a valuable task that has received increasing attention in recent years. In this work, we address this problem in the challenging domain of egocentric photo-streams captured by a low-temporal-resolution wearable camera (2 fpm). The major difficulties to be handled in this context are the sparsity of observations as well as the unpredictability of camera motion and attention orientation, due to the fact that the camera is worn as part of clothing. Our method consists of four steps: multi-face localization and tracking, 3D localization, pose estimation, and analysis of f-formations. By estimating pair-to-pair interaction probabilities over the sequence, our method states the presence or absence of interaction with the camera wearer and specifies which people are more involved in the interaction. We tested our method on a dataset of 18,000 images and show its reliability for our considered purpose. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICMV | ||
Notes | MILAB | Approved | no | ||
Call Number | Admin @ si @ ADR2015a | Serial | 2702 | ||
Permanent link to this record |