Records | |||||
---|---|---|---|---|---|
Author | Simone Balocco; Carlo Gatta; Xavier Carrillo; J. Mauri; Petia Radeva | ||||
Title | Plaque Type, Plaque Burden and Wall Shear Stress Relation in Coronary Arteries Assessed by X-ray Angiography and Intravascular Ultrasound: a Qualitative Study | Type | Conference Article | ||
Year | 2011 | Publication | 14th International Symposium on Applied Sciences in Biomedical and Communication Technologies | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | In this paper, we present a complete framework that automatically provides fluid-dynamic and plaque analysis from IVUS and angiographic sequences. This framework is used to analyze, in three coronary arteries, the relation between wall shear stress and the type and amount of plaque. Preliminary qualitative results show an inverse relation between wall shear stress and plaque burden, which is confirmed by the observation that plaque growth is higher on the wall with concave curvature. Regarding plaque type, regions of low shear stress are predominantly fibro-lipidic, while heavy calcifications are generally located in areas of the vessel with high WSS. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-1-4503-0913-4 | Medium | ||
Area | Expedition | Conference | ISABEL | ||
Notes | MILAB | Approved | no | ||
Call Number | Admin @ si @ BGC2011b | Serial | 1799 | ||
Permanent link to this record | |||||
Author | David Vazquez; Antonio Lopez; Daniel Ponsa; Javier Marin | ||||
Title | Virtual Worlds and Active Learning for Human Detection | Type | Conference Article | ||
Year | 2011 | Publication | 13th International Conference on Multimodal Interaction | Abbreviated Journal | |
Volume | Issue | Pages | 393-400 | ||
Keywords | Pedestrian Detection; Human detection; Virtual; Domain Adaptation; Active Learning | ||||
Abstract | Image-based human detection is of paramount interest due to its potential applications in fields such as advanced driving assistance, surveillance and media analysis. However, even detecting non-occluded standing humans remains a challenge of intensive research. The most promising human detectors rely on classifiers developed in the discriminative paradigm, i.e., trained with labelled samples. However, labelling is a labour-intensive manual step, especially in cases like human detection where it is necessary to provide at least bounding boxes framing the humans for training. To overcome this problem, some authors have proposed the use of a virtual world where the labels of the different objects are obtained automatically. This means that the human models (classifiers) are learnt using the appearance of rendered images, i.e., using realistic computer graphics. Later, these models are used for human detection in images of the real world. The results of this technique are surprisingly good. However, they are not always as good as the classical approach of training and testing with data coming from the same camera, or similar ones. Accordingly, in this paper we address the challenge of using a virtual world for gathering (while playing a videogame) a large amount of automatically labelled samples (virtual humans and background) and then training a classifier that performs as well, on real-world images, as one obtained by training in the same way from manually labelled real-world samples. To do so, we cast the problem as one of domain adaptation, assuming that a small amount of manually labelled samples from real-world images is required. To collect these labelled samples we propose a non-standard active learning technique. Therefore, ultimately our human model is learnt from the combination of virtual- and real-world labelled samples (Fig. 1), which has not been done before. We present quantitative results showing that this approach is valid. | ||||
Address | Alicante, Spain | ||||
Corporate Author | Thesis | ||||
Publisher | ACM DL | Place of Publication | New York, NY, USA | Editor |
Language | English | Summary Language | English | Original Title | Virtual Worlds and Active Learning for Human Detection |
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-1-4503-0641-6 | Medium | ||
Area | Expedition | Conference | ICMI | ||
Notes | ADAS | Approved | yes | ||
Call Number | ADAS @ adas @ VLP2011a | Serial | 1683 | ||
Permanent link to this record | |||||
Author | Alicia Fornes; Volkmar Frinken; Andreas Fischer; Jon Almazan; G. Jackson; Horst Bunke | ||||
Title | A Keyword Spotting Approach Using Blurred Shape Model-Based Descriptors | Type | Conference Article | ||
Year | 2011 | Publication | Proceedings of the 2011 Workshop on Historical Document Imaging and Processing | Abbreviated Journal | |
Volume | Issue | Pages | 83-90 | ||
Keywords | |||||
Abstract | The automatic processing of handwritten historical documents is considered a hard problem in pattern recognition. In addition to the challenges posed by modern handwritten data, a lack of training data as well as effects caused by the degradation of documents can be observed. In this scenario, keyword spotting arises as a viable solution for making documents amenable to searching and browsing. For this task we propose the adaptation of shape descriptors used in symbol recognition. By treating each word image as a shape, it can be represented using the Blurred Shape Model and the Deformable Blurred Shape Model. Experiments on the George Washington database demonstrate that this approach is able to outperform the commonly used Dynamic Time Warping approach. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | ACM | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-1-4503-0916-5 | Medium | ||
Area | Expedition | Conference | HIP | ||
Notes | DAG | Approved | no | ||
Call Number | Admin @ si @ FFF2011a | Serial | 1823 | ||
Permanent link to this record | |||||
Author | Andreas Fischer; Volkmar Frinken; Alicia Fornes; Horst Bunke | ||||
Title | Transcription Alignment of Latin Manuscripts Using Hidden Markov Models | Type | Conference Article | ||
Year | 2011 | Publication | Proceedings of the 2011 Workshop on Historical Document Imaging and Processing | Abbreviated Journal | |
Volume | Issue | Pages | 29-36 | ||
Keywords | |||||
Abstract | Transcriptions of historical documents are a valuable source for extracting labeled handwriting images that can be used for training recognition systems. In this paper, we introduce the Saint Gall database that includes images as well as the transcription of a Latin manuscript from the 9th century written in Carolingian script. Although the available transcription is of high quality for a human reader, the spelling of the words is not accurate when compared with the handwriting image. Hence, the transcription poses several challenges for alignment regarding, e.g., line breaks, abbreviations, and capitalization. We propose an alignment system based on character Hidden Markov Models that can cope with these challenges and efficiently aligns complete document pages. On the Saint Gall database, we demonstrate that a considerable alignment accuracy can be achieved, even with weakly trained character models. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | ACM | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | HIP | ||
Notes | DAG | Approved | no | ||
Call Number | Admin @ si @ FFF2011b | Serial | 1824 | ||
Permanent link to this record | |||||
Author | Sergio Escalera; Ana Puig; Oscar Amoros; Maria Salamo | ||||
Title | Intelligent GPGPU Classification in Volume Visualization: a framework based on Error-Correcting Output Codes | Type | Journal Article | ||
Year | 2011 | Publication | Computer Graphics Forum | Abbreviated Journal | CGF |
Volume | 30 | Issue | 7 | Pages | 2107-2115 |
Keywords | |||||
Abstract | IF JCR 1.455 (2010), rank 25/99. In volume visualization, the definition of the regions of interest is inherently an iterative trial-and-error process of finding the best parameters to classify and render the final image. Generally, the user requires a lot of expertise to analyze and edit these parameters through multi-dimensional transfer functions. In this paper, we present a framework of intelligent methods to label on-demand multiple regions of interest. These methods can be split into a two-level GPU-based labelling algorithm that computes, at rendering time, a set of labelled structures using the machine-learning Error-Correcting Output Codes (ECOC) framework. In a pre-processing step, ECOC trains a set of AdaBoost binary classifiers from a reduced pre-labelled data set. Then, at the testing stage, each classifier is independently applied to the features of a set of unlabelled samples and combined to perform multi-class labelling. We also propose an alternative representation of these classifiers that allows the testing stage to be highly parallelized. To exploit that parallelism we implemented the testing stage in GPU-OpenCL. The empirical results on different data sets for several volume structures show high computational performance and classification accuracy. (An illustrative ECOC decoding sketch follows this record.) | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB; HuPBA | Approved | no | ||
Call Number | Admin @ si @ EPA2011 | Serial | 1881 | ||
Permanent link to this record | |||||
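The record above describes a two-stage ECOC pipeline: AdaBoost binary classifiers are trained from a coding design in a pre-processing step, then applied independently at test time and combined into a multi-class label. The following is a minimal CPU-only sketch of that decoding idea, assuming a one-versus-all coding matrix, scikit-learn's AdaBoostClassifier as the binary learner, Hamming-distance decoding, and synthetic data; the paper's actual coding design, volume features and GPU-OpenCL test stage are not reproduced.

```python
# Hedged sketch: ECOC multi-class labelling with AdaBoost binary classifiers.
# One-vs-all coding matrix and scikit-learn AdaBoost are illustrative stand-ins.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def train_ecoc(X, y, n_classes, n_estimators=50):
    """Train one binary dichotomizer per column of a one-vs-all ECOC matrix."""
    M = -np.ones((n_classes, n_classes), dtype=int)   # rows are class codewords
    np.fill_diagonal(M, 1)
    dichotomizers = []
    for col in range(M.shape[1]):
        y_bin = M[y, col]                              # relabel each sample as +1 / -1
        dichotomizers.append(AdaBoostClassifier(n_estimators=n_estimators).fit(X, y_bin))
    return M, dichotomizers

def predict_ecoc(X, M, dichotomizers):
    """Apply every dichotomizer independently and decode by Hamming distance."""
    outputs = np.column_stack([clf.predict(X) for clf in dichotomizers])
    dist = np.array([np.sum(outputs != codeword, axis=1) for codeword in M])
    return np.argmin(dist, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 4)) + np.repeat(np.arange(3)[:, None], 100, axis=0)
    y = np.repeat(np.arange(3), 100)
    M, clfs = train_ecoc(X, y, n_classes=3)
    print("training accuracy:", np.mean(predict_ecoc(X, M, clfs) == y))
```

The paper parallelizes the test stage on the GPU; the same decoding structure (independent classifier outputs, then a per-codeword distance) is what makes that parallelization natural.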
Author | Aura Hernandez-Sabate; Debora Gil; Jaume Garcia; Enric Marti | ||||
Title | Image-based Cardiac Phase Retrieval in Intravascular Ultrasound Sequences | Type | Journal Article | ||
Year | 2011 | Publication | IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control | Abbreviated Journal | T-UFFC |
Volume | 58 | Issue | 1 | Pages | 60-72 |
Keywords | 3-D exploring; ECG; band-pass filter; cardiac motion; cardiac phase retrieval; coronary arteries; electrocardiogram signal; image intensity local mean evolution; image-based cardiac phase retrieval; in vivo pullbacks acquisition; intravascular ultrasound sequences; longitudinal motion; signal extrema; time 36 ms; band-pass filters; biomedical ultrasonics; cardiovascular system; electrocardiography; image motion analysis; image retrieval; image sequences; medical image processing; ultrasonic imaging | ||||
Abstract | Longitudinal motion during in vivo pullback acquisition of intravascular ultrasound (IVUS) sequences is a major artifact for 3-D exploration of coronary arteries. Most current techniques obtain a gated pullback without longitudinal motion by relying on the electrocardiogram (ECG) signal, either through specific hardware or by processing the signal itself. We present an image-based approach for cardiac phase retrieval from coronary IVUS sequences that does not require an ECG signal. A signal reflecting cardiac motion is computed by exploring the evolution of the local mean of image intensity. The signal is filtered by a band-pass filter centered at the main cardiac frequency, and the phase is retrieved by computing the signal extrema. The average frame processing time with our setup is 36 ms. Comparison to manually sampled sequences encourages a deeper study comparing them to ECG signals. (A brief sketch of this gating scheme follows this record.) | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0885-3010 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | IAM; ADAS | Approved | no | |
Call Number | IAM @ iam @ HGG2011 | Serial | 1546 | ||
Permanent link to this record | |||||
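The abstract above outlines a concrete gating procedure: build a per-frame signal from the local mean of image intensity, band-pass it around the main cardiac frequency, and take the signal extrema as phase markers. Below is a minimal sketch of that pipeline on a synthetic sequence; the filter order, band width, frequency search range and the synthetic data are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch of image-based cardiac gating: per-frame mean intensity signal,
# band-pass filtering around the dominant cardiac frequency, extrema as phases.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def cardiac_phase_frames(frames, frame_rate, band_hz=0.5):
    """frames: (n_frames, H, W) IVUS sequence; returns indices of gated frames."""
    signal = frames.reshape(len(frames), -1).mean(axis=1)   # intensity mean per frame
    signal = signal - signal.mean()
    # dominant cardiac frequency from the spectrum, restricted to 0.5-3 Hz (30-180 bpm)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / frame_rate)
    spectrum = np.abs(np.fft.rfft(signal))
    valid = (freqs > 0.5) & (freqs < 3.0)
    f0 = freqs[valid][np.argmax(spectrum[valid])]
    # band-pass filter centred at the main cardiac frequency
    nyq = frame_rate / 2.0
    low = max(f0 - band_hz, 0.1) / nyq
    high = min(f0 + band_hz, 0.9 * nyq) / nyq
    b, a = butter(2, [low, high], btype="band")
    filtered = filtfilt(b, a, signal)
    peaks, _ = find_peaks(filtered)                          # extrema: one frame per cardiac cycle
    return peaks

if __name__ == "__main__":
    fps, seconds, heart_hz = 30, 10, 1.2
    t = np.arange(fps * seconds) / fps
    base = 0.2 * np.sin(2 * np.pi * heart_hz * t) + 0.05 * np.random.default_rng(1).normal(size=t.size)
    frames = base[:, None, None] + np.zeros((t.size, 64, 64))  # synthetic constant-texture frames
    print("gated frames:", cardiac_phase_frames(frames, fps))
```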
Author | Sergio Escalera; Alicia Fornes; Oriol Pujol; Josep Llados; Petia Radeva | ||||
Title | Circular Blurred Shape Model for Multiclass Symbol Recognition | Type | Journal Article | ||
Year | 2011 | Publication | IEEE Transactions on Systems, Man, and Cybernetics, Part B | Abbreviated Journal | TSMCB
Volume | 41 | Issue | 2 | Pages | 497-506 |
Keywords | |||||
Abstract | In this paper, we propose a circular blurred shape model descriptor to deal with the problem of symbol detection and classification as a particular case of object recognition. The feature extraction is performed by capturing the spatial arrangement of significant object characteristics in a correlogram structure. The shape information from objects is shared among correlogram regions, where a prior blurring degree defines the level of distortion allowed in the symbol, making the descriptor tolerant to irregular deformations. Moreover, the descriptor is rotation invariant by definition. We validate the effectiveness of the proposed descriptor in both the multiclass symbol recognition and symbol detection domains. In order to perform the symbol detection, the descriptors are learned using a cascade of classifiers. In the case of multiclass categorization, the new feature space is learned using a set of binary classifiers which are embedded in an error-correcting output code design. The results over four symbol data sets show the significant improvements of the proposed descriptor compared to the state-of-the-art descriptors. In particular, the results are even more significant in those cases where the symbols suffer from elastic deformations. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1083-4419 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | MILAB; DAG; HuPBA | Approved | no | |
Call Number | Admin @ si @ EFP2011 | Serial | 1784 | ||
Permanent link to this record | |||||
Author | Arjan Gijsenij; Theo Gevers | ||||
Title | Color Constancy Using Natural Image Statistics and Scene Semantics | Type | Journal Article | ||
Year | 2011 | Publication | IEEE Transactions on Pattern Analysis and Machine Intelligence | Abbreviated Journal | TPAMI |
Volume | 33 | Issue | 4 | Pages | 687-698 |
Keywords | |||||
Abstract | Existing color constancy methods are all based on specific assumptions such as the spatial and spectral characteristics of images. As a consequence, no algorithm can be considered as universal. However, with the large variety of available methods, the question is how to select the method that performs best for a specific image. To achieve selection and combining of color constancy algorithms, in this paper natural image statistics are used to identify the most important characteristics of color images. Then, based on these image characteristics, the proper color constancy algorithm (or best combination of algorithms) is selected for a specific image. To capture the image characteristics, the Weibull parameterization (e.g., grain size and contrast) is used. It is shown that the Weibull parameterization is related to the image attributes to which the used color constancy methods are sensitive. An MoG-classifier is used to learn the correlation and weighting between the Weibull-parameters and the image attributes (number of edges, amount of texture, and SNR). The output of the classifier is the selection of the best performing color constancy method for a certain image. Experimental results show a large improvement over state-of-the-art single algorithms. On a data set consisting of more than 11,000 images, an increase in color constancy performance up to 20 percent (median angular error) can be obtained compared to the best-performing single algorithm. Further, it is shown that for certain scene categories, one specific color constancy algorithm can be used instead of the classifier considering several algorithms. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0162-8828 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ISE | Approved | no | ||
Call Number | Admin @ si @ GiG2011 | Serial | 1724 | ||
Permanent link to this record | |||||
Author | Maria Salamo; Sergio Escalera | ||||
Title | Increasing Retrieval Quality in Conversational Recommenders | Type | Journal Article | ||
Year | 2011 | Publication | IEEE Transactions on Knowledge and Data Engineering | Abbreviated Journal | TKDE |
Volume | 99 | Issue | Pages | 1-1 | |
Keywords | |||||
Abstract | IF JCR (CCIA) 2.286 (2009), rank 24/103; JCR Impact Factor 2010: 1.851. A major task of research in conversational recommender systems is personalization. Critiquing is a common and powerful form of feedback, where a user can express her feature preferences by applying a series of directional critiques over the recommendations instead of providing specific preference values. Incremental Critiquing is a conversational recommender system that uses critiquing as feedback to efficiently personalize products. The expectation is that in each cycle the system retrieves the products that best satisfy the user's soft product preferences from a minimal information input. In this paper, we present a novel technique that increases retrieval quality based on a combination of compatibility and similarity scores. Under the hypothesis that a user learns during the recommendation process, we propose two novel exponential reinforcement learning approaches for compatibility that take into account both the instant at which the user makes a critique and the number of satisfied critiques. Moreover, we consider that the impact of features on the similarity differs according to the preferences manifested by the user. We propose a global weighting approach that uses a common weight for nearest cases in order to focus on groups of relevant products. We show that our methodology significantly improves recommendation efficiency in four data sets of different sizes in terms of session length in comparison with state-of-the-art approaches. Moreover, our recommender shows higher robustness against noisy user data when compared to classical approaches. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | IEEE | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1041-4347 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | MILAB; HuPBA | Approved | no | ||
Call Number | Admin @ si @ SaE2011 | Serial | 1713 | ||
Permanent link to this record | |||||
Author | Fadi Dornaika; Jose Manuel Alvarez; Angel Sappa; Antonio Lopez | ||||
Title | A New Framework for Stereo Sensor Pose through Road Segmentation and Registration | Type | Journal Article | ||
Year | 2011 | Publication | IEEE Transactions on Intelligent Transportation Systems | Abbreviated Journal | TITS |
Volume | 12 | Issue | 4 | Pages | 954-966 |
Keywords | road detection | ||||
Abstract | This paper proposes a new framework for real-time estimation of the onboard stereo head's position and orientation relative to the road surface, which is required for any advanced driver-assistance application. This framework can be used with all road types: highways, urban, etc. Unlike existing works that rely on feature extraction in either the image domain or 3-D space, we propose a framework that directly estimates the unknown parameters from the stream of stereo pairs' brightness. The proposed approach consists of two stages that are invoked for every stereo frame. The first stage segments the road region in one monocular view. The second stage estimates the camera pose using a featureless registration between the segmented monocular road region and the other view in the stereo pair. This paper has two main contributions. The first contribution combines a road segmentation algorithm with a registration technique to estimate the online stereo camera pose. The second contribution solves the registration using a featureless method, which is carried out using two different optimization techniques: 1) the differential evolution algorithm and 2) the Levenberg-Marquardt (LM) algorithm. We provide experiments and evaluations of performance. The results presented show the validity of our proposed framework. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1524-9050 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ADAS | Approved | no | ||
Call Number | Admin @ si @ DAS2011; ADAS @ adas @ das2011a | Serial | 1833 | ||
Permanent link to this record | |||||
Author | Jose Manuel Alvarez; Antonio Lopez | ||||
Title | Road Detection Based on Illuminant Invariance | Type | Journal Article | ||
Year | 2011 | Publication | IEEE Transactions on Intelligent Transportation Systems | Abbreviated Journal | TITS |
Volume | 12 | Issue | 1 | Pages | 184-193 |
Keywords | road detection | ||||
Abstract | By using an onboard camera, it is possible to detect the free road surface ahead of the ego-vehicle. Road detection is of high relevance for autonomous driving, road departure warning, and supporting driver-assistance systems such as vehicle and pedestrian detection. The key for vision-based road detection is the ability to classify image pixels as belonging or not to the road surface. Identifying road pixels is a major challenge due to the intraclass variability caused by lighting conditions. A particularly difficult scenario appears when the road surface has both shadowed and nonshadowed areas. Accordingly, we propose a novel approach to vision-based road detection that is robust to shadows. The novelty of our approach relies on using a shadow-invariant feature space combined with a model-based classifier. The model is built online to improve the adaptability of the algorithm to the current lighting and the presence of other vehicles in the scene. The proposed algorithm works in still images and does not depend on either road shape or temporal restrictions. Quantitative and qualitative experiments on real-world road sequences with heavy traffic and shadows show that the method is robust to shadows and lighting variations. Moreover, the proposed method provides the highest performance when compared with hue-saturation-intensity (HSI)-based algorithms. (An illustrative sketch of a shadow-invariant feature space follows this record.) | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ADAS | Approved | no | ||
Call Number | ADAS @ adas @ AlL2011 | Serial | 1456 | ||
Permanent link to this record | |||||
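The paper relies on a shadow-invariant feature space combined with an online model-based classifier. The sketch below uses one widely known shadow-invariant representation, a 1-D log-chromaticity projection, together with a crude seed-region road model; the projection angle, the seed region and the Gaussian scoring are illustrative assumptions and may differ from the feature space and classifier actually used in the paper.

```python
# Hedged sketch: shadow-invariant grey-scale image via log-chromaticity projection,
# plus a rough online road model built from a seed region at the image bottom.
import numpy as np

def illuminant_invariant(rgb, theta_deg=45.0, eps=1e-6):
    """rgb: float image in [0, 1], shape (H, W, 3). Returns a 1-channel invariant image."""
    r, g, b = rgb[..., 0] + eps, rgb[..., 1] + eps, rgb[..., 2] + eps
    x1, x2 = np.log(r / g), np.log(b / g)          # log-chromaticity, G as reference channel
    theta = np.deg2rad(theta_deg)                  # camera-dependent angle; placeholder value
    return x1 * np.cos(theta) + x2 * np.sin(theta)

def road_likelihood(invariant_img, seed_rows=20):
    """Score pixels by similarity to a road model taken from the bottom rows of the image."""
    seed = invariant_img[-seed_rows:, :]
    mu, sigma = seed.mean(), seed.std() + 1e-6
    return np.exp(-0.5 * ((invariant_img - mu) / sigma) ** 2)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    img = rng.uniform(0.2, 0.8, size=(120, 160, 3))
    scores = road_likelihood(illuminant_invariant(img))
    print("road-likelihood range:", scores.min(), scores.max())
```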
Author | Ariel Amato; Mikhail Mozerov; Andrew Bagdanov; Jordi Gonzalez | ||||
Title | Accurate Moving Cast Shadow Suppression Based on Local Color Constancy Detection | Type | Journal Article | |
Year | 2011 | Publication | IEEE Transactions on Image Processing | Abbreviated Journal | TIP |
Volume | 20 | Issue | 10 | Pages | 2954 - 2966 |
Keywords | |||||
Abstract | This paper describes a novel framework for detection and suppression of properly shadowed regions for most possible scenarios occurring in real video sequences. Our approach requires no prior knowledge about the scene, nor is it restricted to specific scene structures. Furthermore, the technique can detect both achromatic and chromatic shadows even in the presence of camouflage that occurs when foreground regions are very similar in color to shadowed regions. The method exploits local color constancy properties due to reflectance suppression over shadowed regions. To detect shadowed regions in a scene, the values of the background image are divided by values of the current frame in the RGB color space. We show how this luminance ratio can be used to identify segments with low gradient constancy, which in turn distinguish shadows from foreground. Experimental results on a collection of publicly available datasets illustrate the superior performance of our method compared with the most sophisticated, state-of-the-art shadow detection algorithms. These results show that our approach is robust and accurate over a broad range of shadow types and challenging video conditions. (An illustrative sketch of the luminance-ratio cue follows this record.) | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1057-7149 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ISE | Approved | no | ||
Call Number | Admin @ si @ AMB2011 | Serial | 1716 | ||
Permanent link to this record | |||||
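The abstract above describes dividing the background image by the current frame in RGB and using the local constancy of this ratio to separate cast shadows from foreground. The snippet below is a minimal sketch of that cue on a synthetic example; the ratio and gradient thresholds are illustrative assumptions rather than the paper's values, and the full chromatic/achromatic shadow handling is not reproduced.

```python
# Hedged sketch of the luminance-ratio cue: background / frame in RGB, then mark
# foreground pixels whose ratio is > 1 and locally smooth as cast shadow.
import numpy as np

def shadow_mask(background, frame, fg_mask, ratio_min=1.05, grad_max=0.08, eps=1e-6):
    """background, frame: float RGB images in [0, 1]; fg_mask: boolean foreground mask."""
    ratio = background / (frame + eps)                 # > 1 where the scene got darker
    darker = np.all(ratio > ratio_min, axis=-1)
    gy, gx = np.gradient(ratio.mean(axis=-1))          # local constancy of the ratio
    smooth = np.hypot(gx, gy) < grad_max
    return fg_mask & darker & smooth

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    bg = rng.uniform(0.4, 0.6, size=(80, 80, 3))
    frame = bg.copy()
    frame[20:60, 20:60] *= 0.7                          # synthetic cast shadow
    fg = np.zeros((80, 80), dtype=bool)
    fg[20:60, 20:60] = True
    print("shadow pixels detected:", shadow_mask(bg, frame, fg).sum())
```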
Author | Jose Seabra; Francesco Ciompi; Oriol Pujol; J. Mauri; Petia Radeva; Joao Sanchez | ||||
Title | Rayleigh Mixture Model for Plaque Characterization in Intravascular Ultrasound | Type | Journal Article | ||
Year | 2011 | Publication | IEEE Transactions on Biomedical Engineering | Abbreviated Journal | TBME |
Volume | 58 | Issue | 5 | Pages | 1314-1324 |
Keywords | |||||
Abstract | Vulnerable plaques are the major cause of carotid and coronary vascular problems, such as heart attack or stroke. A correct modeling of plaque echomorphology and composition can help the identification of such lesions. The Rayleigh distribution is widely used to describe (nearly) homogeneous areas in ultrasound images. Since plaques may contain tissues with heterogeneous regions, more complex distributions depending on multiple parameters are usually needed, such as Rice, K or Nakagami distributions. In such cases, the problem formulation becomes more complex, and the optimization procedure to estimate the plaque echomorphology is more difficult. Here, we propose to model the tissue echomorphology by means of a mixture of Rayleigh distributions, known as the Rayleigh mixture model (RMM). The problem formulation is still simple, but its ability to describe complex textural patterns is very powerful. In this paper, we present a method for the automatic estimation of the RMM mixture parameters by means of the expectation maximization algorithm, which aims at characterizing tissue echomorphology in ultrasound (US). The performance of the proposed model is evaluated with a database of in vitro intravascular US cases. We show that the mixture coefficients and Rayleigh parameters explicitly derived from the mixture model are able to accurately describe different plaque types and to significantly improve the characterization performance of an already existing methodology. (An illustrative EM sketch for this mixture model follows this record.) | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB; HuPBA | Approved | no | |
Call Number | Admin @ si @ SCP2011 | Serial | 1712 | ||
Permanent link to this record | |||||
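The Rayleigh mixture model named above has closed-form EM updates: component responsibilities in the E-step and weighted Rayleigh maximum-likelihood scale estimates in the M-step. The sketch below fits such a mixture to synthetic envelope data; the number of components, the initialization and the fixed iteration count are illustrative choices, and the IVUS feature extraction and plaque classifier of the paper are not reproduced.

```python
# Hedged sketch: expectation-maximization for a Rayleigh mixture model (RMM).
import numpy as np

def rayleigh_pdf(x, sigma2):
    """Rayleigh density parameterized by sigma^2."""
    return (x / sigma2) * np.exp(-x ** 2 / (2.0 * sigma2))

def fit_rmm(x, n_components=2, n_iter=200):
    """Fit a Rayleigh mixture to 1-D non-negative data x with plain EM."""
    weights = np.full(n_components, 1.0 / n_components)
    # data-driven initialization of the scales from quantiles of x^2
    sigma2 = np.quantile(x ** 2, np.linspace(0.25, 0.75, n_components)) / 2.0
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample
        dens = np.stack([w * rayleigh_pdf(x, s2) for w, s2 in zip(weights, sigma2)], axis=1)
        resp = dens / (dens.sum(axis=1, keepdims=True) + 1e-12)
        # M-step: weighted Rayleigh MLE, sigma_k^2 = sum(r * x^2) / (2 * sum(r))
        nk = resp.sum(axis=0)
        weights = nk / len(x)
        sigma2 = (resp * x[:, None] ** 2).sum(axis=0) / (2.0 * nk + 1e-12)
    return weights, sigma2

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = np.concatenate([rng.rayleigh(1.0, 5000), rng.rayleigh(3.0, 5000)])
    w, s2 = fit_rmm(x, n_components=2)
    print("weights:", np.round(w, 2), "sigmas:", np.round(np.sqrt(s2), 2))  # expect ~[0.5 0.5], [1. 3.]
```

The mixture weights and scale parameters recovered this way are the kind of descriptors the paper feeds to its plaque characterization stage.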
Author | Jaime Moreno; Xavier Otazu | ||||
Title | Image compression algorithm based on Hilbert scanning of embedded quadTrees: an introduction of the Hi-SET coder | Type | Conference Article | ||
Year | 2011 | Publication | IEEE International Conference on Multimedia and Expo | Abbreviated Journal | |
Volume | Issue | Pages | 1-6 | ||
Keywords | |||||
Abstract | In this work we present an effective and computationally simple algorithm for image compression based on Hilbert Scanning of Embedded quadTrees (Hi-SET). It allows an image to be represented as an embedded bitstream along a fractal function. Embedding is an important feature of modern image compression algorithms; as Salomon notes [1, p. 614], another, perhaps unique, feature is achieving the best quality for the number of bits input by the decoder at any point during decoding. Hi-SET also possesses this latter feature. Furthermore, the coder is based on a quadtree partition strategy which, applied to image transforms such as the discrete cosine or wavelet transform, yields energy clustering in both frequency and space. The coding algorithm is composed of three general steps and uses just a list of significant pixels. The implementation of the proposed coder covers both gray-scale and color image compression. Hi-SET compressed images are, on average, 6.20 dB better than the ones obtained by other compression techniques based on Hilbert scanning. Moreover, Hi-SET improves image quality by 1.39 dB and 1.00 dB in gray-scale and color compression, respectively, compared with the JPEG2000 coder. (A brief sketch of the Hilbert scan ordering follows this record.) | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1945-7871 | ISBN | 978-1-61284-348-3 | Medium | |
Area | Expedition | Conference | ICME | ||
Notes | CIC | Approved | no | ||
Call Number | Admin @ si @ MoO2011a | Serial | 2176 | ||
Permanent link to this record | |||||
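Hi-SET's name comes from scanning samples along a Hilbert curve so that spatially neighbouring pixels stay adjacent in the embedded bitstream. The sketch below only shows that scan ordering, using the standard distance-to-coordinate conversion for a Hilbert curve on a power-of-two grid; the embedded quadtree coder itself, the transform stage and the significance lists are not reproduced.

```python
# Hedged sketch of the Hilbert scanning step behind Hi-SET: visit the pixels of a
# 2^k x 2^k image along a Hilbert curve.
import numpy as np

def hilbert_d2xy(order, d):
    """Map position d along a Hilbert curve filling a 2**order x 2**order grid to (x, y)."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                       # rotate the quadrant when needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def hilbert_scan(image):
    """Return the pixels of a square 2**k-sized image in Hilbert-curve order."""
    n = image.shape[0]
    order = int(np.log2(n))
    coords = [hilbert_d2xy(order, d) for d in range(n * n)]
    return np.array([image[y, x] for x, y in coords])

if __name__ == "__main__":
    img = np.arange(16).reshape(4, 4)
    print(hilbert_scan(img))   # spatially neighbouring pixels remain adjacent in the stream
```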
Author | Alicia Fornes; Anjan Dutta; Albert Gordo; Josep Llados | ||||
Title | The ICDAR 2011 Music Scores Competition: Staff Removal and Writer Identification | Type | Conference Article | ||
Year | 2011 | Publication | 11th International Conference on Document Analysis and Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 1511-1515 | ||
Keywords | |||||
Abstract | In recent years, there has been growing interest in the analysis of handwritten music scores. Our goal has been to foster this interest through two different competitions: staff removal and writer identification. Both competitions have been run on the CVC-MUSCIMA database, a ground truth of handwritten music score images. This paper describes the competition details, including the dataset and ground truth, the evaluation metrics, and a short description of the participants, their methods, and the obtained results. | ||||
Address | Beijing, China | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-0-7695-4520-2 | Medium | ||
Area | Expedition | Conference | ICDAR | ||
Notes | DAG | Approved | no | ||
Call Number | Admin @ si @ FDG2011b | Serial | 1794 | ||
Permanent link to this record |