Records
Author: Mikhail Mozerov; Ariel Amato; Xavier Roca; Jordi Gonzalez
Title: Solving the Multi Object Occlusion Problem in a Multiple Camera Tracking System
Type: Journal
Year: 2009 | Publication: Pattern Recognition and Image Analysis
Volume: 19 | Issue: 1 | Pages: 165-171
Abstract: An efficient method to overcome adverse effects of occlusion upon object tracking is presented. The method is based on matching paths of objects in time and solves a complex occlusion-caused problem of merging separate segments of the same path.
ISSN: 1054-6618
Notes: ISE | Approved: no
Call Number: ISE @ ise @ MAR2009a | Serial: 1160
Author: Francisco Blanco; Felipe Lumbreras; Joan Serrat; Roswitha Siener; Silvia Serranti; Giuseppe Bonifazi; Montserrat Lopez Mesas; Manuel Valiente
Title: Taking advantage of Hyperspectral Imaging classification of urinary stones against conventional IR Spectroscopy
Type: Journal Article
Year: 2014 | Publication: Journal of Biomedical Optics | Abbreviated Journal: JBiO
Volume: 19 | Issue: 12 | Pages: 126004-1 - 126004-9
Abstract: The analysis of urinary stones is mandatory for the best management of the disease after stone passage in order to prevent further stone episodes. The use of an appropriate methodology for individualized stone analysis thus becomes a key factor in giving the patient the most suitable treatment. A recently developed hyperspectral imaging methodology, based on pixel-to-pixel analysis of near-infrared spectral images, is compared to the reference technique in stone analysis, infrared (IR) spectroscopy. The developed classification model yields a >90% correct classification rate when compared to IR and is able to precisely locate stone components within the structure of the stone with a 15 µm resolution. Owing to the minimal sample pretreatment, short analysis time, good performance of the model, and the automation of the measurements, which makes them analyst independent, this methodology is a candidate to become a routine analysis for clinical laboratories.
Notes: ADAS; 600.076 | Approved: no
Call Number: Admin @ si @ BLS2014 | Serial: 2563
Author: J. Kuhn; A. Nussbaumer; J. Pirker; Dimosthenis Karatzas; A. Pagani; O. Conlan; M. Memmel; C. M. Steiner; C. Gutl; D. Albert; Andreas Dengel
Title: Advancing Physics Learning Through Traversing a Multi-Modal Experimentation Space
Type: Conference Article
Year: 2015 | Publication: Workshop Proceedings of the 11th International Conference on Intelligent Environments
Volume: 19 | Pages: 373-380
Abstract: Translating conceptual knowledge into real world experiences presents a significant educational challenge. This position paper presents an approach that supports learners in moving seamlessly between conceptual learning and its application in the real world by bringing physical and virtual experiments into everyday settings. Learners are empowered to conduct these situated experiments in a variety of physical settings by leveraging state-of-the-art mobile, augmented reality, and virtual reality technology. A blend of mobile-based multi-sensory physical experiments, augmented reality, and enabling virtual environments can allow learners to bridge their conceptual learning with tangible experiences in a completely novel manner. This approach focuses on the learner by applying self-regulated personalised learning techniques, underpinned by innovative pedagogical approaches and adaptation techniques, to ensure that the needs and preferences of each learner are catered for individually.
Address: Prague, Czech Republic; July 2015
Conference: IE
Notes: DAG; 600.077 | Approved: no
Call Number: Admin @ si @ KNP2015 | Serial: 2694
Author: Lluis Gomez; Dimosthenis Karatzas
Title: A fast hierarchical method for multi-script and arbitrary oriented scene text extraction
Type: Journal Article
Year: 2016 | Publication: International Journal on Document Analysis and Recognition | Abbreviated Journal: IJDAR
Volume: 19 | Issue: 4 | Pages: 335-349
Keywords: scene text; segmentation; detection; hierarchical grouping; perceptual organisation
Abstract: Typography and layout lead to the hierarchical organisation of text in words, text lines, and paragraphs. This inherent structure is a key property of text in any script and language, which has nonetheless been minimally leveraged by existing text detection methods. This paper addresses the problem of text segmentation in natural scenes from a hierarchical perspective. Contrary to existing methods, we make explicit use of text structure, aiming directly at the detection of region groupings corresponding to text within a hierarchy produced by an agglomerative similarity clustering process over individual regions. We propose an optimal way to construct such a hierarchy, introducing a feature space designed to produce text group hypotheses with high recall and a novel stopping rule combining a discriminative classifier and a probabilistic measure of group meaningfulness based on perceptual organisation. Results obtained over four standard datasets, covering text in variable orientations and different languages, demonstrate that our algorithm, while being trained on a single mixed dataset, outperforms state-of-the-art methods in unconstrained scenarios.
Notes: DAG; 600.056; 601.197 | Approved: no
Call Number: Admin @ si @ GoK2016a | Serial: 2862
Author: Weiqing Min; Shuqiang Jiang; Jitao Sang; Huayang Wang; Xinda Liu; Luis Herranz
Title: Being a Supercook: Joint Food Attributes and Multimodal Content Modeling for Recipe Retrieval and Exploration
Type: Journal Article
Year: 2017 | Publication: IEEE Transactions on Multimedia | Abbreviated Journal: TMM
Volume: 19 | Issue: 5 | Pages: 1100-1113
Abstract: This paper considers the problem of recipe-oriented image-ingredient correlation learning with multi-attributes for recipe retrieval and exploration. Existing methods mainly focus on food visual information for recognition, while we model visual information, textual content (e.g., ingredients), and attributes (e.g., cuisine and course) together to solve extended recipe-oriented problems, such as multimodal cuisine classification and attribute-enhanced food image retrieval. As a solution, we propose a multimodal multitask deep belief network (M3TDBN) to learn a joint image-ingredient representation regularized by different attributes. By grouping ingredients into visible ingredients (which are visible in the food image, e.g., “chicken” and “mushroom”) and nonvisible ingredients (e.g., “salt” and “oil”), M3TDBN is capable of learning both midlevel visual representation between images and visible ingredients and nonvisual representation. Furthermore, in order to utilize different attributes to improve the intermodality correlation, M3TDBN incorporates multitask learning to make different attributes collaborate with each other. Based on the proposed M3TDBN, we exploit the derived deep features and the discovered correlations for three extended novel applications: 1) multimodal cuisine classification; 2) attribute-augmented cross-modal recipe image retrieval; and 3) ingredient and attribute inference from food images. The proposed approach is evaluated on the constructed Yummly dataset, and the evaluation results validate its effectiveness.
Notes: LAMP; 600.120 | Approved: no
Call Number: Admin @ si @ MJS2017 | Serial: 2964
Author: Luis Herranz; Shuqiang Jiang; Ruihan Xu
Title: Modeling Restaurant Context for Food Recognition
Type: Journal Article
Year: 2017 | Publication: IEEE Transactions on Multimedia | Abbreviated Journal: TMM
Volume: 19 | Issue: 2 | Pages: 430-440
Abstract: Food photos are widely used in food logs for diet monitoring and in social networks to share social and gastronomic experiences. A large number of these images are taken in restaurants. Dish recognition in general is very challenging, due to different cuisines, cooking styles, and the intrinsic difficulty of modeling food from its visual appearance. However, contextual knowledge can be crucial to improve recognition in such scenarios. In particular, geocontext has been widely exploited for outdoor landmark recognition. Similarly, we exploit knowledge about menus and the locations of restaurants and test images. We first adapt a framework based on discarding unlikely categories located far from the test image. Then, we reformulate the problem using a probabilistic model connecting dishes, restaurants, and locations. We apply that model to three different tasks: dish recognition, restaurant recognition, and location refinement. Experiments on six datasets show that by integrating multiple evidences (visual, location, and external knowledge) our system can boost the performance in all tasks.
Notes: LAMP; 600.120 | Approved: no
Call Number: Admin @ si @ HJX2017 | Serial: 2965
Author: David Berga; C. Wloka; J. K. Tsotsos
Title: Modeling task influences for saccade sequence and visual relevance prediction
Type: Journal Article
Year: 2019 | Publication: Journal of Vision | Abbreviated Journal: JV
Volume: 19 | Issue: 10 | Pages: 106c-106c
Abstract: Previous work from Wloka et al. (2017) presented the Selective Tuning Attentive Reference model Fixation Controller (STAR-FC), an active vision model for saccade prediction. Although the model is able to efficiently predict saccades during free-viewing, it is well known that stimulus and task instructions can strongly affect eye movement patterns (Yarbus, 1967). These factors are considered in previous Selective Tuning architectures (Tsotsos and Kruijne, 2014; Tsotsos, Kotseruba and Wloka, 2016; Rosenfeld, Biparva and Tsotsos, 2017), proposing a way to combine bottom-up and top-down contributions to fixation and saccade programming. In particular, task priming has been shown to be crucial to the deployment of eye movements, involving interactions between brain areas related to goal-directed behavior, working and long-term memory in combination with stimulus-driven eye movement neuronal correlates. Initial theories and models of these influences include (Rao, Zelinsky, Hayhoe and Ballard, 2002; Navalpakkam and Itti, 2005; Huang and Pashler, 2007) and show distinct ways to process the task requirements in combination with bottom-up attention. In this study we extend the STAR-FC with novel computational definitions of Long-Term Memory, Visual Task Executive, and a Task Relevance Map. With these modules we are able to use textual instructions in order to guide the model to attend to specific categories of objects and/or places in the scene. We have designed our memory model by processing a hierarchy of visual features learned from salient object detection datasets. The relationship between the executive task instructions and the memory representations has been specified using a tree of semantic similarities between the learned features and the object category labels. Results reveal that by using this model, the resulting relevance maps and predicted saccades have a higher probability to fall inside the salient regions depending on the distinct task instructions.
Notes: NEUROBIT; 600.128; 600.120 | Approved: no
Call Number: Admin @ si @ BWT2019 | Serial: 3308
Author: Hao Fang; Ajian Liu; Jun Wan; Sergio Escalera; Chenxu Zhao; Xu Zhang; Stan Z Li; Zhen Lei
Title: Surveillance Face Anti-spoofing
Type: Journal Article
Year: 2024 | Publication: IEEE Transactions on Information Forensics and Security | Abbreviated Journal: TIFS
Volume: 19 | Pages: 1535-1546
Abstract: Face Anti-spoofing (FAS) is essential to secure face recognition systems from various physical attacks. However, recent research generally focuses on short-distance applications (i.e., phone unlocking) while lacking consideration of long-distance scenes (i.e., surveillance security checks). In order to promote relevant research and fill this gap in the community, we collect a large-scale Surveillance High-Fidelity Mask (SuHiFiMask) dataset captured under 40 surveillance scenes, which has 101 subjects from different age groups with 232 3D attacks (high-fidelity masks), 200 2D attacks (posters, portraits, and screens), and 2 adversarial attacks. In this scene, low image resolution and noise interference are new challenges faced in surveillance FAS. Together with the SuHiFiMask dataset, we propose a Contrastive Quality-Invariance Learning (CQIL) network to alleviate the performance degradation caused by image quality from three aspects: (1) An Image Quality Variable module (IQV) is introduced to recover image information associated with discrimination by combining the super-resolution network. (2) Generated sample pairs are used to simulate quality variance distributions, helping contrastive learning strategies obtain robust feature representations under quality variation. (3) A Separate Quality Network (SQN) is designed to learn discriminative features independent of image quality. Finally, a large number of experiments verify the quality of the SuHiFiMask dataset and the superiority of the proposed CQIL.
Notes: HUPBA | Approved: no
Call Number: Admin @ si @ FLW2024 | Serial: 3869
Author: D. Seron; F. Moreso; C. Gratin; Jordi Vitria; E. Condom
Title: Automated classification of renal interstitium and tubules by local texture analysis and a neural network
Type: Journal Article
Year: 1996 | Publication: Analytical and Quantitative Cytology and Histology
Volume: 18 | Issue: 5 | Pages: 410-9 | PMID: 8908314
Notes: OR; MV | Approved: no
Call Number: BCNPCL @ bcnpcl @ SMG1996 | Serial: 76
Author: Josep Llados; Gemma Sanchez
Title: Graph Matching vs. Graph Parsing in Graphics Recognition: A Combined Approach
Type: Journal
Year: 2004 | Publication: International Journal of Pattern Recognition and Artificial Intelligence | Abbreviated Journal: IJPRAI
Volume: 18 | Issue: 3 | Pages: 455-473
Notes: DAG; IF: 0.588 | Approved: no
Call Number: DAG @ dag @ LlS2004 | Serial: 445
Author: Matthias S. Keil
Title: Smooth Gradient Representations as a Unifying Account of Chevreul’s Illusion, Mach Bands, and a Variant of the Ehrenstein Disk
Type: Journal
Year: 2006 | Publication: Neural Computation | Abbreviated Journal: NEURALCOMPUT
Volume: 18 | Issue: 4 | Pages: 871-903
Approved: no
Call Number: Admin @ si @ Kei2006 | Serial: 633
Author: A. Diplaros; N. Vlassis; Theo Gevers
Title: A Spatially Constrained Generative Model and an EM Algorithm for Image Segmentation
Type: Journal
Year: 2007 | Publication: IEEE Transactions on Neural Networks
Volume: 18 | Issue: 3 | Pages: 798-808
Notes: ISE | Approved: no
Call Number: Admin @ si @ DVG2007 | Serial: 947
Author: Fadi Dornaika; Angel Sappa
Title: Real Time Image Registration for Planar Structure and 3D Sensor Pose Estimation
Type: Book Chapter
Year: 2008 | Publication: Stereo Vision
Volume: 18 | Pages: 299-316
Editor: Asim Bhatti
Notes: ADAS | Approved: no
Call Number: ADAS @ adas @ DoS2008c | Serial: 1057
Author: Santiago Segui; Michal Drozdzal; Ekaterina Zaytseva; Fernando Azpiroz; Petia Radeva; Jordi Vitria
Title: Detection of wrinkle frames in endoluminal videos using betweenness centrality measures for images
Type: Journal Article
Year: 2014 | Publication: IEEE Transactions on Information Technology in Biomedicine | Abbreviated Journal: TITB
Volume: 18 | Issue: 6 | Pages: 1831-1838
Keywords: Wireless Capsule Endoscopy; Small Bowel Motility Dysfunction; Contraction Detection; Structured Prediction; Betweenness Centrality
Abstract: Intestinal contractions are one of the most important events to diagnose motility pathologies of the small intestine. When visualized by wireless capsule endoscopy (WCE), the sequence of frames that represents a contraction is characterized by a clear wrinkle structure in the central frames that corresponds to the folding of the intestinal wall. In this paper we present a new method to robustly detect wrinkle frames in full WCE videos by using a new mid-level image descriptor that is based on a centrality measure proposed for graphs. We present an extended validation, carried out on a very large database, showing that the proposed method achieves state-of-the-art performance for this task.
Notes: OR; MILAB; 600.046; MV | Approved: no
Call Number: Admin @ si @ SDZ2014 | Serial: 2385
Author: Joost Van de Weijer; Cordelia Schmid; Jakob Verbeek; Diane Larlus
Title: Learning Color Names for Real-World Applications
Type: Journal Article
Year: 2009 | Publication: IEEE Transactions on Image Processing | Abbreviated Journal: TIP
Volume: 18 | Issue: 7 | Pages: 1512-1524
Abstract: Color names are required in real-world applications such as image retrieval and image annotation. Traditionally, they are learned from a collection of labelled color chips. These color chips are labelled with color names within a well-defined experimental setup by human test subjects. However, naming colors in real-world images differs significantly from this experimental setting. In this paper, we investigate how color names learned from color chips compare to color names learned from real-world images. To avoid hand labelling real-world images with color names, we use Google Image to collect a data set. Due to limitations of Google Image, this data set contains a substantial quantity of wrongly labelled data. We propose several variants of the PLSA model to learn color names from this noisy data. Experimental results show that color names learned from real-world images significantly outperform color names learned from labelled color chips for both image retrieval and image annotation.
ISSN: 1057-7149
Approved: no
Call Number: CAT @ cat @ WSV2009 | Serial: 1195