Author Kaida Xiao; Chenyang Fu; Dimosthenis Karatzas; Sophie Wuerger
  Title Visual Gamma Correction for LCD Displays Type Journal Article
  Year 2011 Publication Displays Abbreviated Journal DIS  
  Volume 32 Issue 1 Pages 17-23  
  Keywords Display calibration; Psychophysics; Perceptual; Visual gamma correction; Luminance matching; Observer-based calibration  
  Abstract An improved method for visual gamma correction is developed for LCD displays to increase the accuracy of digital colour reproduction. Rather than utilising a photometric measurement device, we use observers' visual luminance judgements for gamma correction. Eight half-tone patterns were designed to generate relative luminances from 1/9 to 8/9 for each colour channel. A psychophysical experiment was conducted on an LCD display to find the digital signals corresponding to each relative luminance by visually matching the half-tone background to a uniform colour patch. Both inter- and intra-observer variability for the eight luminance matches in each channel were assessed, and the luminance matches proved to be consistent across observers (ΔE00 < 3.5) and repeatable (ΔE00 < 2.2). Based on the individual observer judgements, the display opto-electronic transfer function (OETF) was estimated using either a 3rd-order polynomial regression or linear interpolation for each colour channel. The performance of the proposed method is evaluated by predicting the CIE tristimulus values of a set of coloured patches (using the observer-based OETFs) and comparing them to the expected CIE tristimulus values (using the OETF obtained from spectro-radiometric luminance measurements). The resulting colour differences range from 2 to 4.6 ΔE00. We conclude that this observer-based method of visual gamma correction is useful for estimating the OETF of LCD displays. Its major advantage is that no particular functional relationship between digital inputs and luminance outputs has to be assumed.  
  Address  
  Corporate Author Thesis  
  Publisher Elsevier Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes DAG Approved no  
  Call Number Admin @ si @ XFK2011 Serial 1815  
 

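A minimal sketch of the OETF-estimation step described in the abstract above: fitting a 3rd-order polynomial to eight visually matched (digital signal, relative luminance) pairs for one channel. The data values below are illustrative, not taken from the paper:

```python
import numpy as np

# Illustrative matches for one channel: digital counts (0-255) that an
# observer judged to produce relative luminances 1/9 ... 8/9.
target_luminance = np.arange(1, 9) / 9.0
matched_signal = np.array([95, 130, 155, 175, 196, 214, 230, 245])

# Fit a 3rd-order polynomial OETF (normalized signal -> relative luminance),
# anchored with the black (0, 0) and white (1, 1) endpoints.
x = np.concatenate(([0.0], matched_signal / 255.0, [1.0]))
y = np.concatenate(([0.0], target_luminance, [1.0]))
coeffs = np.polyfit(x, y, 3)
oetf = np.poly1d(coeffs)

# The fitted OETF can then linearize an arbitrary digital input.
luminance = oetf(200 / 255.0)
```

The paper also mentions linear interpolation as an alternative to the polynomial fit; `numpy.interp(x_query, x, y)` would serve that variant.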
 
Author Joost Van de Weijer; Shida Beigpour
  Title The Dichromatic Reflection Model: Future Research Directions and Applications Type Conference Article
  Year 2011 Publication International Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications Abbreviated Journal  
  Volume Issue Pages  
  Keywords dblp  
  Abstract The dichromatic reflection model (DRM) predicts that color distributions form a parallelogram in color space, whose shape is defined by the body reflectance and the illuminant color. In this paper we review the assumptions which led to the DRM and briefly recall two of its main application domains: color image segmentation and photometric invariant feature computation. After introducing the model we discuss several limitations of the theory, especially those which arise when working on real-world uncalibrated images. In addition, we summarize recent extensions of the model which allow it to handle more complicated light interactions. Finally, we suggest some future research directions which would further extend its applicability.  
  Address Algarve, Portugal  
  Corporate Author Thesis  
  Publisher SciTePress Place of Publication Editor Mestetskiy, Leonid and Braz, José  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-989-8425-47-8 Medium  
  Area Expedition Conference VISIGRAPP  
  Notes CIC Approved no  
  Call Number Admin @ si @ WeB2011 Serial 1778  
 

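The parallelogram prediction summarized in the abstract above follows from writing each pixel as a weighted sum of a body term and an interface (illuminant-coloured) term. A toy sketch, with made-up colour vectors, verifying that such pixels stay in the 2D subspace spanned by the two colours:

```python
import numpy as np

# Dichromatic reflection model: a pixel is a weighted sum of the body
# reflectance colour c_body and the illuminant colour c_illum.
c_body = np.array([0.7, 0.2, 0.1])   # illustrative matte object colour
c_illum = np.array([1.0, 1.0, 0.9])  # illustrative illuminant colour

rng = np.random.default_rng(0)
m_body = rng.uniform(0.2, 1.0, size=100)   # geometric shading weights
m_spec = rng.uniform(0.0, 0.5, size=100)   # specular weights
pixels = np.outer(m_body, c_body) + np.outer(m_spec, c_illum)

# All pixels lie in the plane spanned by c_body and c_illum, so the
# residual outside that 2D subspace is numerically zero.
basis, _ = np.linalg.qr(np.stack([c_body, c_illum], axis=1))
residual = pixels - (pixels @ basis) @ basis.T
```

The limitations the paper discusses (compression artifacts, unknown acquisition parameters) amount to this residual no longer being zero on real-world images.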
 
Author Jordi Vitria; Joao Sanchez; Miguel Raposo; Mario Hernandez
  Title Pattern Recognition and Image Analysis Type Book Whole
  Year 2011 Publication 5th Iberian Conference Pattern Recognition and Image Analysis Abbreviated Journal  
  Volume 6669 Issue Pages  
  Keywords  
  Abstract  
  Address Las Palmas de Gran Canaria. Spain  
  Corporate Author Thesis  
  Publisher Springer-Verlag Place of Publication Berlin Editor J. Vitrià; J. Sanchez; M. Raposo; M. Hernandez  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-3-642-2125 Medium  
  Area Expedition Conference IbPRIA  
  Notes OR;MV Approved no  
  Call Number Admin @ si @ VSR2011 Serial 1730  
 

 
Author Maria Vanrell; Naila Murray; Robert Benavente; C. Alejandro Parraga; Xavier Otazu; Ramon Baldrich
  Title Perception Based Representations for Computational Colour Type Conference Article
  Year 2011 Publication 3rd International Workshop on Computational Color Imaging Abbreviated Journal  
  Volume 6626 Issue Pages 16-30  
  Keywords colour perception, induction, naming, psychophysical data, saliency, segmentation  
  Abstract The perceived colour of a stimulus depends on multiple factors stemming either from the context of the stimulus or from idiosyncrasies of the observer. The complexity involved in combining these multiple effects is the main reason for the gap between classical calibrated colour spaces from colour science and the colour representations used in computer vision, where colour is just one more visual cue immersed in a digital image in which surfaces, shadows and illuminants interact seemingly out of control. With the aim of advancing a few steps towards bridging this gap, we present some results on computational representations of colour for computer vision. They have been developed by introducing perceptual considerations derived from the interaction of the colour of a point with its context. We show some techniques to represent the colour of a point influenced by assimilation and contrast effects due to the image surround, and we show some results on how colour saliency can be derived in real images. We outline a model for automatic assignment of colour names to image points trained directly on psychophysical data. We show how colour segments can be perceptually grouped in the image by imposing shading coherence in the colour space.  
  Address Milan, Italy  
  Corporate Author Thesis  
  Publisher Springer-Verlag Place of Publication Editor Raimondo Schettini, Shoji Tominaga, Alain Trémeau  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-3-642-20403-6 Medium  
  Area Expedition Conference CCIW  
  Notes CIC Approved no  
  Call Number Admin @ si @ VMB2011 Serial 1733  
 

 
Author Eduard Vazquez; Ramon Baldrich; Joost Van de Weijer; Maria Vanrell
  Title Describing Reflectances for Colour Segmentation Robust to Shadows, Highlights and Textures Type Journal Article
  Year 2011 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI  
  Volume 33 Issue 5 Pages 917-930  
  Keywords  
  Abstract The segmentation of a single material reflectance is a challenging problem due to the considerable variation in image measurements caused by the geometry of the object, shadows, and specularities. The combination of these effects has been modeled by the dichromatic reflection model. However, the application of the model to real-world images is limited due to unknown acquisition parameters and compression artifacts. In this paper, we present a robust model for the shape of a single material reflectance in histogram space. The method is based on a multilocal creaseness analysis of the histogram, which results in a set of ridges representing the material reflectances. The segmentation method derived from these ridges is robust to shadows, shading, specularities, and texture in real-world images. We further complete the method by incorporating prior knowledge from image statistics and by incorporating spatial coherence using multiscale color contrast information. Results obtained show that our method clearly outperforms state-of-the-art segmentation methods on a widely used segmentation benchmark, its main characteristic being its excellent performance in the presence of shadows and highlights at low computational cost.  
  Address Los Alamitos, CA, USA  
  Corporate Author Thesis  
  Publisher IEEE Computer Society Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0162-8828 ISBN Medium  
  Area Expedition Conference  
  Notes CIC Approved no  
  Call Number Admin @ si @ VBW2011 Serial 1715  
 

 
Author Eduard Vazquez
  Title Unsupervised image segmentation based on material reflectance description and saliency Type Book Whole
  Year 2011 Publication PhD Thesis, Universitat Autonoma de Barcelona-CVC Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Image segmentation aims to partition an image into a set of non-overlapping regions, called segments. Despite the simplicity of the definition, image segmentation is a very complex problem at every stage, and the definition of a segment is still unclear. When asked to perform a segmentation, a person segments at different levels of abstraction: some segments might be a single, well-defined texture, whereas others correspond to an object in the scene which might include multiple textures and colors. For this reason, segmentation is divided into bottom-up segmentation and top-down segmentation. Bottom-up segmentation is problem independent, that is, focused on general properties of the images such as textures or illumination. Top-down segmentation is a problem-dependent approach which looks for specific entities in the scene, such as known objects. This work is focused on bottom-up segmentation. Beginning from an analysis of the shortcomings of current methods, we propose an approach called RAD. Our approach overcomes the main shortcomings of those methods which use the physics of light to perform the segmentation. RAD is a topological approach which describes a single-material reflectance. Afterwards, we cope with one of the main problems in image segmentation: unsupervised adaptability to image content. To yield an unsupervised method, we use a saliency model also presented in this thesis, which computes the saliency of the chromatic transitions of an image by means of a statistical analysis of the image derivatives. This saliency method is used to build our final segmentation approach: spRAD, an unsupervised segmentation method. Our saliency approach has been validated both with a psychophysical experiment and computationally, outperforming a state-of-the-art saliency method. spRAD also outperforms state-of-the-art segmentation techniques, as results obtained with a widely-used segmentation dataset show.  
  Address  
  Corporate Author Thesis Ph.D. thesis  
  Publisher Place of Publication Editor Ramon Baldrich  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes CIC Approved no  
  Call Number Admin @ si @ Vaz2011b Serial 1835  
 

 
Author Javier Vazquez
  Title Colour Constancy in Natural Images Through Colour Naming and Sensor Sharpening Type Book Whole
  Year 2011 Publication PhD Thesis, Universitat Autonoma de Barcelona-CVC Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Colour is derived from three physical properties: incident light, object reflectance and sensor sensitivities. Incident light varies under natural conditions; hence, recovering the scene illuminant is an important issue in computational colour. One way to deal with this problem under calibrated conditions is to follow three steps: 1) building a narrow-band sensor basis to accomplish the diagonal model, 2) building a feasible set of illuminants, and 3) defining criteria to select the best illuminant. In this work we focus on colour constancy for natural images by introducing perceptual criteria in the first and third stages.
To deal with the illuminant selection step, we hypothesise that basic colour categories can be used as anchor categories to recover the best illuminant. These colour names are related to the way the human visual system has evolved to encode relevant natural colour statistics. The recovered image therefore provides the best representation of the scene labelled with the basic colour terms. We demonstrate with several experiments how this selection criterion achieves current state-of-the-art results in computational colour constancy. In addition, we prove psychophysically that the angular error usually used in colour constancy does not correlate with human preferences, and we propose a new perceptual colour constancy evaluation.
The implementation of this selection criterion relies strongly on the use of a diagonal model for illuminant change. Consequently, the second contribution focuses on building an appropriate narrow-band sensor basis to represent natural images. We propose to use the spectral sharpening technique to compute a unique narrow-band basis optimised to represent a large set of natural reflectances under natural illuminants, given in the basis of human cones. The proposed sensors allow predicting unique hues and the World Color Survey data independently of the illuminant by using a compact singularity function. Additionally, we studied different families of sharp sensors to minimise different perceptual measures; this study led us to extend the spherical sampling procedure from 3D to 6D.
Several research lines remain open. One natural extension would be to measure the effects of using the computed sharp sensors on the category hypothesis, while another might be to insert spatial contextual information to improve the category hypothesis. Finally, much work remains to be done to explore how individual sensors can be adjusted to the colours in a scene.
 
  Address  
  Corporate Author Thesis Ph.D. thesis  
  Publisher Ediciones Graficas Rey Place of Publication Editor Maria Vanrell;Graham D. Finlayson  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes CIC Approved no  
  Call Number Admin @ si @ Vaz2011a Serial 1785  
 

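The diagonal model for illuminant change that the thesis above builds on can be sketched in a few lines; the sensor responses and illuminant estimates below are illustrative values, not the sharpened sensors derived in the thesis:

```python
import numpy as np

# Diagonal (von Kries-like) illuminant change: each channel is scaled
# independently, which holds well for narrow-band (sharpened) sensors.
rho_unknown = np.array([0.42, 0.31, 0.18])   # responses under unknown light
e_unknown = np.array([0.9, 1.0, 0.6])        # estimated illuminant (RGB)
e_canonical = np.array([1.0, 1.0, 1.0])      # canonical (white) illuminant

# Map responses to the canonical illuminant with a diagonal matrix.
d = np.diag(e_canonical / e_unknown)
rho_canonical = d @ rho_unknown
```

The illuminant-selection contribution of the thesis amounts to choosing, among a feasible set of `e_unknown` candidates, the one whose corrected image is best described by basic colour terms.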
 
Author Koen E.A. van de Sande; Jasper Uijlings; Theo Gevers; Arnold Smeulders
  Title Segmentation as Selective Search for Object Recognition Type Conference Article
  Year 2011 Publication 13th IEEE International Conference on Computer Vision Abbreviated Journal  
  Volume Issue Pages 1879-1886  
  Keywords  
  Abstract For object recognition, the current state-of-the-art is based on exhaustive search. However, to enable the use of more expensive features and classifiers and thereby progress beyond the state-of-the-art, a selective search strategy is needed. Therefore, we adapt segmentation as a selective search by reconsidering segmentation: We propose to generate many approximate locations over few and precise object delineations because (1) an object whose location is never generated cannot be recognised and (2) appearance and immediate nearby context are most effective for object recognition. Our method is class-independent and is shown to cover 96.7% of all objects in the Pascal VOC 2007 test set using only 1,536 locations per image. Our selective search enables the use of the more expensive bag-of-words method, which we use to substantially improve the state-of-the-art by up to 8.5% for 8 out of 20 classes on the Pascal VOC 2010 detection challenge.  
  Address Barcelona  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1550-5499 ISBN 978-1-4577-1101-5 Medium  
  Area Expedition Conference ICCV  
  Notes ISE Approved no  
  Call Number Admin @ si @ SUG2011 Serial 1780  
 

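The coverage figure quoted in the abstract above (the fraction of ground-truth objects matched by at least one proposal) is conventionally computed with an intersection-over-union test, as in the Pascal VOC evaluation. A minimal sketch with made-up boxes:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def recall(truth_boxes, proposals, thresh=0.5):
    """Fraction of ground-truth objects covered by some proposal."""
    hit = [any(iou(t, p) >= thresh for p in proposals) for t in truth_boxes]
    return sum(hit) / float(len(hit))

truth = [(10, 10, 50, 50), (60, 60, 100, 100)]   # illustrative ground truth
props = [(12, 8, 48, 52), (0, 0, 30, 30)]        # illustrative proposals
```

With these toy boxes only the first object is covered at the 0.5 threshold, so `recall(truth, props)` returns 0.5.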
 
Author E. Serradell; Adriana Romero; R. Leta; Carlo Gatta; Francesc Moreno-Noguer
  Title Simultaneous Correspondence and Non-Rigid 3D Reconstruction of the Coronary Tree from Single X-Ray Images Type Conference Article
  Year 2011 Publication 13th IEEE International Conference on Computer Vision Abbreviated Journal  
  Volume Issue Pages 850-857  
  Keywords  
  Abstract  
  Address Barcelona  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICCV  
  Notes MILAB Approved no  
  Call Number Admin @ si @ SRL2011 Serial 1803  
 

 
Author Yainuvis Socarras
  Title Image segmentation for improving pedestrian detection Type Report
  Year 2011 Publication CVC Technical Report Abbreviated Journal  
  Volume 167 Issue Pages  
  Keywords  
  Abstract  
  Address Bellaterra (Spain)  
  Corporate Author Computer Vision Center Thesis Master's thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ADAS; Approved no  
  Call Number Admin @ si @ Soc2011 Serial 1933  
 

 
Author Koen E.A. van de Sande; Theo Gevers; Cees G.M. Snoek
  Title Empowering Visual Categorization with the GPU Type Journal Article
  Year 2011 Publication IEEE Transactions on Multimedia Abbreviated Journal TMM  
  Volume 13 Issue 1 Pages 60-70  
  Keywords  
  Abstract Visual categorization is important to manage large collections of digital images and video, where textual meta-data is often incomplete or simply unavailable. The bag-of-words model has become the most powerful method for visual categorization of images and video. Despite its high accuracy, a severe drawback of this model is its high computational cost. As newer CPU and GPU architectures increase computational power mainly by increasing their level of parallelism, exploiting this parallelism becomes an important direction for handling the computational cost of the bag-of-words approach. When optimizing a system based on the bag-of-words approach, the goal is to minimize the time it takes to process batches of images; additionally, we also consider power usage as an evaluation metric. In this paper, we analyze the bag-of-words model for visual categorization in terms of computational cost and identify two major bottlenecks: the quantization step and the classification step. We address these two bottlenecks by proposing two efficient algorithms for quantization and classification by exploiting the GPU hardware and the CUDA parallel programming model. The algorithms are designed to (1) keep categorization accuracy intact, (2) decompose the problem, and (3) give the same numerical results. In experiments on large-scale datasets it is shown that, by using a parallel implementation on the GeForce GTX 260 GPU, classifying unseen images is 4.8 times faster than a quad-core CPU version on the Core i7 920, while giving the exact same numerical results. In addition, we show how the algorithms can be generalized to other applications, such as text retrieval and video retrieval. Moreover, when the obtained speedup is used to process extra video frames in a video retrieval benchmark, the accuracy of visual categorization is improved by 29%.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ISE Approved no  
  Call Number Admin @ si @ SGS2011b Serial 1729  
 

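The quantization bottleneck identified in the abstract above is the nearest-visual-word assignment of local descriptors. A vectorized CPU sketch of that step (NumPy here, not the CUDA kernels of the paper; codebook and descriptors are synthetic):

```python
import numpy as np

def quantize(descriptors, codebook):
    """Assign each descriptor to its nearest visual word (Euclidean)."""
    # ||d - c||^2 = ||d||^2 - 2 d.c + ||c||^2; one matrix multiply does
    # the heavy lifting, which is also what maps well onto a GPU.
    d2 = (descriptors ** 2).sum(axis=1)[:, None]
    c2 = (codebook ** 2).sum(axis=1)[None, :]
    dist2 = d2 - 2.0 * descriptors @ codebook.T + c2
    return np.argmin(dist2, axis=1)

rng = np.random.default_rng(1)
codebook = rng.standard_normal((16, 8))       # 16 visual words, 8-D
# Descriptors built as slightly perturbed codewords 3, 7, 7, 12.
descriptors = codebook[[3, 7, 7, 12]] + 0.01 * rng.standard_normal((4, 8))
words = quantize(descriptors, codebook)
```

Expressing the distance computation as a single matrix product is the standard trick for making this step parallel-friendly, in the spirit of the GPU decomposition the paper describes.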
 
Author Albert Ali Salah; Theo Gevers; Nicu Sebe; Alessandro Vinciarelli
  Title Computer Vision for Ambient Intelligence Type Journal Article
  Year 2011 Publication Journal of Ambient Intelligence and Smart Environments Abbreviated Journal JAISE  
  Volume 3 Issue 3 Pages 187-191  
  Keywords  
  Abstract  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ISE Approved no  
  Call Number Admin @ si @ SGS2011a Serial 1725  
 

 
Author Nataliya Shapovalova; Wenjuan Gong; Marco Pedersoli; Xavier Roca; Jordi Gonzalez
  Title On Importance of Interactions and Context in Human Action Recognition Type Conference Article
  Year 2011 Publication 5th Iberian Conference on Pattern Recognition and Image Analysis Abbreviated Journal  
  Volume 6669 Issue Pages 58-66  
  Keywords  
  Abstract This paper is focused on the automatic recognition of human events in static images. Popular techniques use knowledge of the human pose for inferring the action, and the most recent approaches tend to combine pose information with either knowledge of the scene or of the objects with which the human interacts. Our approach makes a step forward in this direction by combining the human pose with the scene in which the human is placed, together with the spatial relationships between humans and objects. Based on standard, simple descriptors like HOG and SIFT, recognition performance is enhanced when these three types of knowledge are taken into account. Results obtained in the PASCAL 2010 Action Recognition Dataset demonstrate that our technique reaches state-of-the-art results using simple descriptors and classifiers.  
  Address Las Palmas de Gran Canaria. Spain  
  Corporate Author Thesis  
  Publisher Springer Berlin Heidelberg Place of Publication Editor J. Vitria, J.M. Sanches, and M. Hernandez  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN 0302-9743 ISBN 978-3-642-21256-7 Medium  
  Area Expedition Conference IbPRIA  
  Notes ISE Approved no  
  Call Number Admin @ si @ SGP2011 Serial 1750  
 

 
Author Nataliya Shapovalova; Carles Fernandez; Xavier Roca; Jordi Gonzalez
  Title Semantics of Human Behavior in Image Sequences Type Book Chapter
  Year 2011 Publication Computer Analysis of Human Behavior Abbreviated Journal  
  Volume Issue 7 Pages 151-182  
  Keywords  
  Abstract Human behavior is contextualized and understanding the scene of an action is crucial for giving proper semantics to behavior. In this chapter we present a novel approach for scene understanding. The emphasis of this work is on the particular case of Human Event Understanding. We introduce a new taxonomy to organize the different semantic levels of the Human Event Understanding framework proposed. Such a framework particularly contributes to the scene understanding domain by (i) extracting behavioral patterns from the integrative analysis of spatial, temporal, and contextual evidence and (ii) integrative analysis of bottom-up and top-down approaches in Human Event Understanding. We will explore how the information about interactions between humans and their environment influences the performance of activity recognition, and how this can be extrapolated to the temporal domain in order to extract higher inferences from human events observed in sequences of images.  
  Address  
  Corporate Author Thesis  
  Publisher Springer London Place of Publication Editor Albert Ali Salah;  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-0-85729-993-2 Medium  
  Area Expedition Conference  
  Notes ISE Approved no  
  Call Number Admin @ si @ SFR2011 Serial 1810  
 

 
Author Santiago Segui
  Title Contributions to the Diagnosis of Intestinal Motility by Automatic Image Analysis Type Book Whole
  Year 2011 Publication PhD Thesis, Universitat de Barcelona-CVC Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract In the early twenty-first century, Given Imaging Ltd. presented wireless capsule endoscopy (WCE) as a new technological breakthrough that allowed the visualization of the intestine by using a small, swallowed camera. This small device was received with high enthusiasm within the medical community, and it is still one of the medical devices with the highest use growth rate. WCE can be used as a novel diagnostic tool that presents several clinical advantages: it is non-invasive and, at the same time, it provides, for the first time, a full picture of the small bowel morphology, contents and dynamics. Since its appearance, WCE has been used to detect several intestinal dysfunctions such as polyps, ulcers and bleeding. However, the visual analysis of WCE videos presents an important drawback: the long time required by physicians for proper video visualization. Regarding this limitation, the development of computer-aided systems is required for the extensive use of WCE in the medical community.
The work presented in this thesis is a set of contributions to the automatic image analysis and computer-aided diagnosis of intestinal motility disorders using WCE. Until now, the diagnosis of small bowel motility dysfunctions was basically performed by invasive techniques such as the manometry test, which can only be conducted at some referral centers around the world owing to the complexity of the procedure and the medical expertise required in the interpretation of the results.
Our contributions are divided into three main blocks:
1. Image analysis by computer vision techniques to detect events in the endoluminal WCE scene. Several methods have been proposed to detect visual events such as intestinal contractions, intestinal content, tunnel and wrinkles;
2. Machine learning techniques for the analysis and manipulation of the data from WCE. These methods have been proposed in order to overcome the problems that the analysis of WCE presents, such as video acquisition cost, unlabeled data and the large amount of data;
3. Two different systems for the computer-aided diagnosis of intestinal motility disorders using WCE. The first system is a fully automatic method that helps discriminate healthy subjects from patients with severe intestinal motor disorders like pseudo-obstruction or food intolerance. The second system is another automatic method that models healthy subjects and discriminates them from patients with mild intestinal motility disorders.
 
  Address  
  Corporate Author Thesis Ph.D. thesis  
  Publisher Ediciones Graficas Rey Place of Publication Editor Jordi Vitria  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes MILAB Approved no  
  Call Number Admin @ si @ Seg2011 Serial 1836  