Records | |||||
---|---|---|---|---|---|
Author | Fares Alnajar; Theo Gevers; Roberto Valenti; Sennay Ghebreab | ||||
Title | Calibration-free Gaze Estimation using Human Gaze Patterns | Type | Conference Article | ||
Year | 2013 | Publication | 15th IEEE International Conference on Computer Vision | Abbreviated Journal | |
Volume | Issue | Pages | 137-144 | ||
Keywords | |||||
Abstract | We present a novel method to auto-calibrate gaze estimators based on gaze patterns obtained from other viewers. Our method is based on the observation that the gaze patterns of humans are indicative of where a new viewer will look [12]. When a new viewer is looking at a stimulus, we first estimate a topology of gaze points (initial gaze points). Next, these points are transformed so that they match the gaze patterns of other humans to find the correct gaze points. In a flexible uncalibrated setup with a web camera and no chin rest, the proposed method was tested on ten subjects and ten images. The method estimates the gaze points after looking at a stimulus for a few seconds with an average accuracy of 4.3°. Although the reported performance is lower than what could be achieved with dedicated hardware or a calibrated setup, the proposed method still provides sufficient accuracy to trace the viewer's attention. This is promising considering that auto-calibration is done in a flexible setup, without the use of a chin rest, and based only on a few seconds of gaze initialization data. To the best of our knowledge, this is the first work to use human gaze patterns in order to auto-calibrate gaze estimators. | ||||
Address | Sydney | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICCV | ||
Notes | ALTRES;ISE | Approved | no | ||
Call Number | Admin @ si @ AGV2013 | Serial | 2365 | ||
Permanent link to this record | |||||
Author | Hamdi Dibeklioglu; Albert Ali Salah; Theo Gevers | ||||
Title | Like Father, Like Son: Facial Expression Dynamics for Kinship Verification | Type | Conference Article | ||
Year | 2013 | Publication | 15th IEEE International Conference on Computer Vision | Abbreviated Journal | |
Volume | Issue | Pages | 1497-1504 | ||
Keywords | |||||
Abstract | Kinship verification from facial appearance is a difficult problem. This paper explores the possibility of employing facial expression dynamics for this task. By using features that describe facial dynamics and spatio-temporal appearance over smile expressions, we show that it is possible to improve the state of the art in this problem, and verify that it is indeed possible to recognize kinship by resemblance of facial expressions. The proposed method is tested on different kin relationships. On average, 72.89% verification accuracy is achieved on spontaneous smiles. | ||||
Address | Sydney | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICCV | ||
Notes | ALTRES;ISE | Approved | no | ||
Call Number | Admin @ si @ DSG2013 | Serial | 2366 | ||
Permanent link to this record | |||||
Author | Jorge Bernal | ||||
Title | Use of Projection and Back-projection Methods in Bidimensional Computed Tomography Image Reconstruction | Type | Report | ||
Year | 2009 | Publication | CVC Technical Report | Abbreviated Journal |
Volume | 141 | Issue | Pages | ||
Keywords | Projection, Back-projection, CT scan, Euclidean geometry, Radon transform | ||||
Abstract | One of the biggest drawbacks of CT scanners is their associated cost, both in memory and in time. In this project, several methods that simulate their functioning in a more feasible way, from an industrial point of view, are studied. The main group of techniques used is known as 'back-projection'. The underlying concept is to simulate the X-ray emission in CT scans by lines that cross the image we want to reconstruct. In the first part of this document, Euclidean geometry is used to address the tasks of projection and back-projection. Analysis of the results shows that this approach does not lead to a perfect reconstruction, and that it also has problems related to running time and memory cost. Because of this, the second part of the document introduces the 'Filtered Back-projection' method in order to improve the results. Filtered Back-projection methods rely on mathematical transforms (Fourier, Radon) to provide more accurate results in much less time. The main reason for these better results is the use of a filtering step before back-projection that avoids errors caused by high frequencies. As a result of this project, two different implementations, one for each approach, were developed in order to compare their performance. | ||||
Address | |||||
Corporate Author | Computer Vision Center | Thesis | Master's thesis | ||
Publisher | Place of Publication | Barcelona, Spain | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | 800 | Expedition | Conference | ||
Notes | MV; | Approved | no | ||
Call Number | IAM @ iam @ Ber2009 | Serial | 1693 | ||
Permanent link to this record | |||||
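The filtered back-projection pipeline summarized in the abstract above can be stated compactly. The formulation below is the standard textbook one (Radon transform plus ramp-filtered inversion), not notation taken from the report itself.

```latex
% Radon transform: line integrals of f along lines at angle \theta, offset s
p(s,\theta) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}
  f(x,y)\,\delta(x\cos\theta + y\sin\theta - s)\,dx\,dy

% Filtered back-projection: filter each projection with the ramp |\omega|
% (P(\omega,\theta) is the 1-D Fourier transform of p(\cdot,\theta)),
% then back-project the filtered projections over the image
q(s,\theta) = \int_{-\infty}^{\infty} P(\omega,\theta)\,\lvert\omega\rvert\,
  e^{2\pi i \omega s}\,d\omega
\qquad
f(x,y) = \int_{0}^{\pi} q(x\cos\theta + y\sin\theta,\,\theta)\,d\theta
```

The ramp filter $\lvert\omega\rvert$ is exactly the filtering step credited in the abstract with removing the high-frequency blur of plain back-projection.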
Author | Jasper Uijlings; Koen E.A. van de Sande; Theo Gevers; Arnold Smeulders | ||||
Title | Selective Search for Object Recognition | Type | Journal Article | ||
Year | 2013 | Publication | International Journal of Computer Vision | Abbreviated Journal | IJCV |
Volume | 104 | Issue | 2 | Pages | 154-171 |
Keywords | |||||
Abstract | This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99% recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http://disi.unitn.it/~uijlings/SelectiveSearch.html). | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0920-5691 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ALTRES;ISE | Approved | no | ||
Call Number | Admin @ si @ USG2013 | Serial | 2362 | ||
Permanent link to this record | |||||
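The hierarchical grouping at the core of selective search can be illustrated with a toy sketch. The region representation (plain bounding boxes) and the single similarity measure below are simplifications invented for illustration; the paper combines colour, texture, size and fill similarities computed over segmentation regions.

```python
def union_box(a, b):
    # a, b are (x1, y1, x2, y2) boxes; return the tight box covering both
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def area(box):
    return (box[2] - box[0]) * (box[3] - box[1])

def similarity(a, b, image_area):
    # Toy "fill"-style similarity: prefer merges whose union box leaves little
    # empty space, normalised by image size (stand-in for the paper's measures).
    return 1.0 - (area(union_box(a, b)) - area(a) - area(b)) / image_area

def hierarchical_grouping(regions, image_area):
    """Greedily merge the most similar pair of regions until one remains,
    recording every intermediate region as an object-location proposal."""
    proposals = list(regions)
    regions = list(regions)
    while len(regions) > 1:
        best = max(
            ((i, j) for i in range(len(regions)) for j in range(i + 1, len(regions))),
            key=lambda ij: similarity(regions[ij[0]], regions[ij[1]], image_area),
        )
        merged = union_box(regions[best[0]], regions[best[1]])
        regions = [r for k, r in enumerate(regions) if k not in best] + [merged]
        proposals.append(merged)
    return proposals
```

Because proposals are collected at every level of the merge hierarchy, small and large object locations come out of a single pass, which is the property that lets the paper report high recall with a modest number of boxes.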
Author | Zeynep Yucel; Albert Ali Salah; Çetin Meriçli; Tekin Meriçli; Roberto Valenti; Theo Gevers | ||||
Title | Joint Attention by Gaze Interpolation and Saliency | Type | Journal Article | ||
Year | 2013 | Publication | IEEE Transactions on Cybernetics | Abbreviated Journal | T-CIBER
Volume | 43 | Issue | 3 | Pages | 829-842 |
Keywords | |||||
Abstract | Joint attention, which is the ability of coordination of a common point of reference with the communicating party, emerges as a key factor in various interaction scenarios. This paper presents an image-based method for establishing joint attention between an experimenter and a robot. The precise analysis of the experimenter's eye region requires stability and high-resolution image acquisition, which is not always available. We investigate regression-based interpolation of the gaze direction from the head pose of the experimenter, which is easier to track. Gaussian process regression and neural networks are contrasted to interpolate the gaze direction. Then, we combine gaze interpolation with image-based saliency to improve the target point estimates and test three different saliency schemes. We demonstrate the proposed method on a human-robot interaction scenario. Cross-subject evaluations, as well as experiments under adverse conditions (such as dimmed or artificial illumination or motion blur), show that our method generalizes well and achieves rapid gaze estimation for establishing joint attention. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 2168-2267 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ALTRES;ISE | Approved | no | ||
Call Number | Admin @ si @ YSM2013 | Serial | 2363 | ||
Permanent link to this record | |||||
Author | Jorge Bernal; F. Javier Sanchez; Fernando Vilariño | ||||
Title | A Region Segmentation Method for Colonoscopy Images Using a Model of Polyp Appearance | Type | Conference Article | ||
Year | 2011 | Publication | 5th Iberian Conference on Pattern Recognition and Image Analysis | Abbreviated Journal | |
Volume | 6669 | Issue | Pages | 134-143 | |
Keywords | Colonoscopy, Polyp Detection, Region Merging, Region Segmentation. | ||||
Abstract | This work aims at the segmentation of colonoscopy images into a minimum number of informative regions. Our method performs in such a way that, if a polyp is present in the image, it will be exclusively and totally contained in a single region. This result can be used in later stages to classify regions as polyp-containing candidates. The output of the algorithm also defines which regions can be considered as non-informative. The algorithm starts with a high number of initial regions and merges them taking into account the model of polyp appearance obtained from available data. The results show that our segmentations of polyp regions are more accurate than state-of-the-art methods. | ||||
Address | Las Palmas de Gran Canaria, June 2011 | ||||
Corporate Author | SpringerLink | Thesis | |||
Publisher | Place of Publication | Editor | Vitrià, Jordi and Sanches, João and Hernández, Mario | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Lecture Notes in Computer Science | Abbreviated Series Title | LNCS | |
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-3-642-21256-7 | Medium | ||
Area | 800 | Expedition | Conference | IbPRIA | |
Notes | MV;SIAI | Approved | no | ||
Call Number | IAM @ iam @ BSV2011c | Serial | 1696 | ||
Permanent link to this record | |||||
Author | Jorge Bernal; F. Javier Sanchez; Fernando Vilariño | ||||
Title | Depth of Valleys Accumulation Algorithm for Object Detection | Type | Conference Article | ||
Year | 2011 | Publication | 14th Congrés Català d'Intel·ligència Artificial | Abbreviated Journal |
Volume | 1 | Issue | 1 | Pages | 71-80 |
Keywords | Object Recognition, Object Region Identification, Image Analysis, Image Processing | ||||
Abstract | This work aims at detecting the regions of an image in which objects lie, by using information about the intensity of valleys, which appear to surround objects in images where the source of light is in the same direction as the camera. We present our depth of valleys accumulation method, which consists of two stages: first, the definition of the depth of valleys image, which combines the output of a ridges and valleys detector with the morphological gradient to measure how deep a point is inside a valley; and second, an algorithm that marks as interior to objects those points of the image which lie inside complete or incomplete boundaries in the depth of valleys image. To evaluate the performance of our method we have tested it on several application domains. Our results on object region identification are promising, especially in the field of polyp detection in colonoscopy videos, and we also show its applicability in different areas. | ||||
Address | Lleida | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-1-60750-841-0 | Medium | ||
Area | 800 | Expedition | Conference | CCIA | |
Notes | MV;SIAI | Approved | no | ||
Call Number | IAM @ iam @ BSV2011b | Serial | 1699 | ||
Permanent link to this record | |||||
Author | Panagiota Spyridonos; Fernando Vilariño; Jordi Vitria; Petia Radeva; Fernando Azpiroz; Juan Malagelada | ||||
Title | Device, system and method for automatic detection of contractile activity in an image frame | Type | Patent | ||
Year | 2011 | Publication | US 2011/0044515 A1 | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | A device, system and method for automatic detection of contractile activity of a body lumen in an image frame is provided, wherein image frames during contractile activity are captured and/or image frames including contractile activity are automatically detected, such as through pattern recognition and/or feature extraction to trace image frames including contractions, e.g., with wrinkle patterns. A manual procedure of annotation of contractions, e.g. tonic contractions in capsule endoscopy, may consist of the visualization of the whole video by a specialist, and the labeling of the contraction frames. Embodiments of the present invention may be suitable for implementation in an in vivo imaging system. | ||||
Address | Pearl Cohen Zedek Latzer, LLP, 1500 Broadway 12th Floor, New York (NY) 10036 (US) | ||||
Corporate Author | US Patent Office | Thesis | |||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MV;OR;MILAB;SIAI | Approved | no | ||
Call Number | IAM @ iam @ SVV2011 | Serial | 1701 | ||
Permanent link to this record | |||||
Author | Fernando Vilariño; Panagiota Spyridonos; Petia Radeva; Jordi Vitria; Fernando Azpiroz; Juan Malagelada | ||||
Title | Method for automatic classification of in vivo images | Type | Patent | ||
Year | 2010 | Publication | US 2010/0046816 | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | A method for automatically detecting a post-duodenal boundary in an image stream of the gastrointestinal (GI) tract. The image stream is sampled to obtain a reduced set of images for processing. The reduced set of images is filtered to remove non-valid frames or non-valid portions of frames, thereby generating a filtered set of valid images. A polar representation of the valid images is generated. Textural features of the polar representation are processed to detect the post-duodenal boundary of the GI tract. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | 800 | Expedition | Conference | ||
Notes | MV;OR;MILAB;SIAI | Approved | no | ||
Call Number | IAM @ iam @ VSR2010 | Serial | 1702 | ||
Permanent link to this record | |||||
Author | Gerard Lacey; Fernando Vilariño | ||||
Title | Endoscopy system with motion sensors | Type | Patent | ||
Year | 2011 | Publication | US 2011/0032347 A1 | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | An endoscopy system (1) comprises an endoscope (2) with a camera (3) at its tip. The endoscope extends through an endoscope guide (4) for guiding movement of the endoscope and for measurement of its movement as it enters the body. The guide (4) comprises a generally conical body (5) having a through passage (105) through which the endoscope (2) extends. A motion sensor comprises an optical transmitter (7) and a detector (8) mounted alongside the passage (105) to measure the insertion-withdrawal linear motion and also rotation of the endoscope by the endoscopist's hand. The system (1) also comprises a flexure controller (10) having wheels operated by the endoscopist. The camera (3), the motion sensor (7/8), and the flexure controller (10) are all connected to a processor (11) which feeds a display. | ||||
Address | Jacobson Holman PLLC; 400 Seventh Street, N.W. Suite 600; Washington, DC 20004 | ||||
Corporate Author | USPTO | Thesis | |||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | 800 | Expedition | Conference | ||
Notes | MV;SIAI | Approved | no | ||
Call Number | IAM @ iam @ LaV2011 | Serial | 1703 | ||
Permanent link to this record | |||||
Author | Fernando Vilariño; Panagiota Spyridonos; Petia Radeva; Jordi Vitria; Fernando Azpiroz; Juan Malagelada | ||||
Title | Device, system and method for measurement and analysis of contractile activity | Type | Patent | ||
Year | 2009 | Publication | US 2009/0202117 A1 | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | A method and system for determining intestinal dysfunction condition are provided by classifying and analyzing image frames captured in-vivo. The method and system also relate to the detection of contractile activity in intestinal tracts, to automatic detection of video image frames taken in the gastrointestinal tract including contractile activity, and more particularly to measurement and analysis of contractile activity of the GI tract based on image intensity of in vivo image data. | ||||
Address | Pearl Cohen Zedek Latzer | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | 800 | Expedition | Conference | ||
Notes | MV;OR;MILAB;SIAI | Approved | no | ||
Call Number | IAM @ iam @ VSR2009 | Serial | 1704 | ||
Permanent link to this record | |||||
Author | Ferran Diego; Daniel Ponsa; Joan Serrat; Antonio Lopez | ||||
Title | Video Alignment for Change Detection | Type | Journal Article | ||
Year | 2011 | Publication | IEEE Transactions on Image Processing | Abbreviated Journal | TIP |
Volume | 20 | Issue | 7 | Pages | 1858-1869 |
Keywords | video alignment | ||||
Abstract | In this work, we address the problem of aligning two video sequences. Such alignment refers to synchronization, i.e., the establishment of temporal correspondence between frames of the first and second video, followed by spatial registration of all the temporally corresponding frames. Video synchronization and alignment have been attempted before, but most often in the relatively simple cases of fixed or rigidly attached cameras and simultaneous acquisition. In addition, restrictive assumptions have been applied, including linear time correspondence or the knowledge of the complete trajectories of corresponding scene points; to some extent, these assumptions limit the practical applicability of any solutions developed. We intend to solve the more general problem of aligning video sequences recorded by independently moving cameras that follow similar trajectories, based only on the fusion of image intensity and GPS information. The novelty of our approach is to pose the synchronization as a MAP inference problem on a Bayesian network including the observations from these two sensor types, which have been shown to be complementary. Alignment results are presented in the context of videos recorded from vehicles driving along the same track at different times, for different road types. In addition, we explore two applications of the proposed video alignment method, both based on change detection between aligned videos. One is the detection of vehicles, which could be of use in ADAS. The other is online difference spotting in videos of surveillance rounds. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ADAS; IF | Approved | no | ||
Call Number | DPS 2011; ADAS @ adas @ dps2011 | Serial | 1705 | ||
Permanent link to this record | |||||
Author | Marco Pedersoli; Jordi Gonzalez; Andrew Bagdanov; Xavier Roca | ||||
Title | Efficient Discriminative Multiresolution Cascade for Real-Time Human Detection Applications | Type | Journal Article | ||
Year | 2011 | Publication | Pattern Recognition Letters | Abbreviated Journal | PRL |
Volume | 32 | Issue | 13 | Pages | 1581-1587 |
Keywords | |||||
Abstract | Human detection is fundamental in many machine vision applications, like video surveillance, driving assistance, action recognition and scene understanding. However, in most of these applications real-time performance is necessary, and this is not yet achieved by current detection methods. This paper presents a new method for human detection based on a multiresolution cascade of Histograms of Oriented Gradients (HOG) that can greatly reduce the computational cost of the detection search without affecting accuracy. The method consists of a cascade of sliding window detectors. Each detector is a linear Support Vector Machine (SVM) composed of HOG features at different resolutions, from coarse at the first level to fine at the last one. In contrast to previous methods, our approach uses a non-uniform stride of the sliding window that is defined by the feature resolution and allows the detection to be incrementally refined as the search goes from coarse to fine resolution. In this way, the speed-up of the cascade is due not only to the smaller number of features computed at the first levels of the cascade, but also to the reduced number of windows that need to be evaluated at the coarse resolution. Experimental results show that our method reaches a detection rate comparable with the state of the art of detectors based on HOG features, while at the same time the detection search is up to 23 times faster. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ISE | Approved | no | ||
Call Number | Admin @ si @ PGB2011a | Serial | 1707 | ||
Permanent link to this record | |||||
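The coarse-to-fine, non-uniform-stride idea in the abstract above can be sketched in one dimension: scan at a coarse stride with a cheap test, then re-scan only the neighbourhoods of surviving positions at finer strides. The `score` function and the concrete strides used here are hypothetical stand-ins for the HOG+SVM levels of the actual cascade.

```python
def coarse_to_fine_search(score, width, strides, threshold):
    """Scan positions 0..width at the coarsest stride, then refine only around
    positions whose score passes the threshold, at each successively finer
    stride. Returns (detections, number of refinement positions generated)."""
    candidates = list(range(0, width, strides[0]))
    evaluated = len(candidates)
    for prev, stride in zip(strides, strides[1:]):
        refined = []
        for pos in candidates:
            if score(pos) < threshold:
                continue  # rejected by the cheap coarse level
            # re-scan a window of half the previous stride around the survivor
            refined.extend(range(max(0, pos - prev // 2),
                                 min(width, pos + prev // 2) + 1, stride))
        evaluated += len(refined)
        candidates = sorted(set(refined))
    return [p for p in candidates if score(p) >= threshold], evaluated
```

With strides `[16, 4, 1]` over 100 positions, far fewer windows are generated than the 100 a uniform stride-1 scan would need, which mirrors the speed-up mechanism the abstract describes.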
Author | Palaiahnakote Shivakumara; Anjan Dutta; Trung Quy Phan; Chew Lim Tan; Umapada Pal | ||||
Title | A Novel Mutual Nearest Neighbor based Symmetry for Text Frame Classification in Video | Type | Journal Article | ||
Year | 2011 | Publication | Pattern Recognition | Abbreviated Journal | PR |
Volume | 44 | Issue | 8 | Pages | 1671-1683 |
Keywords | |||||
Abstract | In the field of multimedia retrieval in video, text frame classification is essential for text detection, event detection, event boundary detection, etc. We propose a new text frame classification method that introduces a combination of wavelet and median moment with k-means clustering to select probable text blocks among 16 equally sized blocks of a video frame. The same feature combination is used with a new Max–Min clustering at the pixel level to choose probable dominant text pixels in the selected probable text blocks. For the probable text pixels, a so-called mutual nearest neighbor based symmetry is explored with a four-quadrant formation centered at the centroid of the probable dominant text pixels to decide whether a block is a true text block. If a frame produces at least one true text block, it is considered a text frame; otherwise it is a non-text frame. Experimental results on different text and non-text datasets, including two public datasets and our own created data, show that the proposed method gives promising results in terms of recall and precision at the block and frame levels. Further, we also show how existing text detection methods tend to misclassify non-text frames as text frames in terms of recall and precision at both the block and frame levels. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | DAG | Approved | no | ||
Call Number | Admin @ si @ SDP2011 | Serial | 1727 | ||
Permanent link to this record | |||||
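The mutual-nearest-neighbour criterion used by the method above has a simple definition that can be sketched directly: two points form a pair only when each is the other's nearest neighbour. The 2-D points and the brute-force search below are illustrative only; the paper applies the idea to probable dominant text pixels.

```python
def mutual_nearest_neighbor_pairs(points):
    """Return index pairs (i, j), i < j, whose points are each other's
    nearest neighbour under squared Euclidean distance."""
    def nearest(i):
        # brute-force nearest neighbour of point i among all other points
        return min((j for j in range(len(points)) if j != i),
                   key=lambda j: (points[i][0] - points[j][0]) ** 2
                               + (points[i][1] - points[j][1]) ** 2)
    return {(i, nearest(i)) for i in range(len(points))
            if nearest(nearest(i)) == i and i < nearest(i)}
```

Note the asymmetry this filters out: a point may have a nearest neighbour that itself is closer to some third point, in which case no pair is formed.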
Author | Victor Ponce; Sergio Escalera; Xavier Baro | ||||
Title | Multi-modal Social Signal Analysis for Predicting Agreement in Conversation Settings | Type | Conference Article | ||
Year | 2013 | Publication | 15th ACM International Conference on Multimodal Interaction | Abbreviated Journal | |
Volume | Issue | Pages | 495-502 | ||
Keywords | |||||
Abstract | In this paper we present a non-invasive ambient intelligence framework for the analysis of non-verbal communication applied to conversational settings. In particular, we apply feature extraction techniques to multi-modal audio-RGB-depth data. We compute a set of behavioral indicators that define communicative cues coming from the fields of psychology and observational methodology. We test our methodology on data captured in victim-offender mediation scenarios. Using different state-of-the-art classification approaches, our system achieves over 75% accuracy in predicting agreement among the parties involved in the conversations, using the experts' opinions as ground truth. | ||||
Address | Sydney, Australia; December 2013 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-1-4503-2129-7 | Medium | ||
Area | Expedition | Conference | ICMI | ||
Notes | HuPBA;MV | Approved | no | ||
Call Number | Admin @ si @ PEB2013 | Serial | 2488 | ||
Permanent link to this record |