Author Lluis Pere de las Heras; Ernest Valveny; Gemma Sanchez
  Title Combining structural and statistical strategies for unsupervised wall detection in floor plans Type Conference Article
  Year 2013 Publication 10th IAPR International Workshop on Graphics Recognition Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract This paper presents an evolution of the first unsupervised wall segmentation method for floor plans, presented by the authors in [1]. Unlike existing approaches, that first method segments walls independently of their notation and without any pre-annotated data to learn their visual appearance. Despite its good performance, some specific cases, such as curved walls, were not correctly segmented because they do not satisfy the strict structural assumptions on which the methodology relies to learn, in an unsupervised way, the structure of a wall. In this paper, we refine this strategy by dividing the process into two steps. In the first step, potential wall segments are extracted in an unsupervised manner using a modification of [1] that further restricts the areas initially considered as walls. In the second step, these segments are used to learn and spot missed instances based on a modified version of [2], also presented by the authors. The combined method has been tested on 4 datasets with different notations and compared with the state of the art applied to the same datasets. The results show its adaptability to different wall notations and shapes, significantly outperforming the original approach.
 
  Address Bethlehem; PA; USA; August 2013  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference GREC  
  Notes DAG; 600.045 Approved no  
  Call Number Admin @ si @ HVS2013a Serial 2321  
 

 
Author L. Calvet; A. Ferrer; M. Gomes; A. Juan; David Masip
  Title Combining Statistical Learning with Metaheuristics for the Multi-Depot Vehicle Routing Problem with Market Segmentation Type Journal Article
  Year 2016 Publication Computers & Industrial Engineering Abbreviated Journal CIE  
  Volume 94 Issue Pages 93-104  
  Keywords Multi-Depot Vehicle Routing Problem; market segmentation applications; hybrid algorithms; statistical learning  
  Abstract In real-life logistics and distribution activities it is usual to face situations in which the distribution of goods has to be made from multiple warehouses or depots to the final customers. This problem is known as the Multi-Depot Vehicle Routing Problem (MDVRP), and it typically includes two sequential and correlated stages: (a) the assignment map of customers to depots, and (b) the corresponding design of the distribution routes. Most of the existing work in the literature has focused on minimizing distance-based distribution costs while satisfying a number of capacity constraints. However, no attention has been given so far to potential variations in demands due to the fitness of the customer-depot mapping in the case of heterogeneous depots. In this paper, we consider this realistic version of the problem in which the depots are heterogeneous in terms of their commercial offer and customers show different willingness to consume depending on how well the assigned depot fits their preferences. Thus, we assume that different customer-depot assignment maps will lead to different customer-expenditure levels. As a consequence, market-segmentation strategies need to be considered in order to increase sales and total income while accounting for the distribution costs. To solve this extension of the MDVRP, we propose a hybrid approach that combines statistical learning techniques with a metaheuristic framework. First, a set of predictive models is generated from historical data. These statistical models allow estimating the demand of any customer depending on the assigned depot. Then, the estimated expenditure of each customer is included as part of an enriched objective function as a way to better guide the stochastic local search inside the metaheuristic framework. A set of computational experiments contribute to illustrate our approach and how the extended MDVRP considered here differs in terms of the proposed solutions from the traditional one.
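  As a rough illustration of the enriched objective described in the abstract, the following Python sketch combines a distance-based routing cost with model-predicted customer expenditure. It is not the authors' code: the data structures (routes per depot, a customer-to-depot assignment, a distance matrix, per-depot regression models and customer feature vectors) are assumptions made for the example.

    def enriched_cost(routes, assignment, dist, expenditure_models, features):
        """Enriched MDVRP objective: routing cost minus expected income.

        A minimal sketch (not the authors' implementation).  `routes` maps each
        depot to a list of routes (sequences of customer indices), `assignment`
        maps customers to depots, `dist` is a distance matrix including depots,
        `expenditure_models` maps each depot to a fitted regressor predicting a
        customer's expenditure when served from that depot, and `features[c]`
        is the feature vector of customer c.  All names are illustrative.
        """
        routing_cost = 0.0
        for depot, depot_routes in routes.items():
            for route in depot_routes:
                stops = [depot] + list(route) + [depot]
                routing_cost += sum(dist[a][b] for a, b in zip(stops, stops[1:]))

        expected_income = sum(
            float(expenditure_models[depot].predict([features[c]])[0])
            for c, depot in assignment.items()
        )
        # The metaheuristic's local search minimizes cost while rewarding income.
        return routing_cost - expected_income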
  Address  
  Corporate Author Thesis  
  Publisher PERGAMON-ELSEVIER SCIENCE LTD Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title CIE  
  Series Volume Series Issue Edition  
  ISSN 0360-8352 ISBN Medium  
  Area Expedition Conference  
  Notes OR;MV; Approved no  
  Call Number Admin @ si @ CFG2016 Serial 2749  
 

 
Author Jaume Garcia; Petia Radeva; Francesc Carreras
  Title Combining Spectral and Active Shape methods to Track Tagged MRI Type Book Chapter
  Year 2004 Publication Recent Advances in Artificial Intelligence Research and Development Abbreviated Journal  
  Volume Issue Pages 37-44  
  Keywords MR; tagged MR; ASM; LV segmentation; motion estimation.  
  Abstract Tagged magnetic resonance is a very useful and unique tool that provides complete local and global knowledge of left ventricle (LV) motion. In this article we introduce a method capable of tracking and segmenting the LV. Spectral methods are applied to obtain the so-called HARP images, which encode information about movement and are the basis for LV point tracking. For segmentation we use Active Shape Models (ASM), which model LV shape variation in order to overcome possible local misplacements of the boundary. We finally show experiments on both synthetic and real data, which appear very promising.
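  The HARP step mentioned in the abstract (isolating one spectral tag harmonic and keeping its phase) can be sketched with NumPy as follows. This is only an illustrative outline, not the chapter's implementation; the tag-peak location and filter bandwidth are placeholder parameters.

    import numpy as np

    def harp_phase(tagged_slice, tag_freq, bandwidth):
        """Extract a HARP phase map from one tagged MR slice.

        A minimal sketch: isolate one spectral tag peak with a Gaussian
        band-pass mask in the Fourier domain and take the phase of the complex
        image obtained by inverse FFT.  `tag_freq` is the (u, v) location of
        the tag harmonic in cycles per pixel; both it and `bandwidth` are
        illustrative assumptions.
        """
        rows, cols = tagged_slice.shape
        F = np.fft.fftshift(np.fft.fft2(tagged_slice))
        u = np.fft.fftshift(np.fft.fftfreq(rows))[:, None]
        v = np.fft.fftshift(np.fft.fftfreq(cols))[None, :]
        mask = np.exp(-(((u - tag_freq[0]) ** 2 + (v - tag_freq[1]) ** 2)
                        / (2.0 * bandwidth ** 2)))
        harp = np.fft.ifft2(np.fft.ifftshift(F * mask))
        return np.angle(harp)  # phase acts as a (near) material property of the tissue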
  Address  
  Corporate Author Thesis  
  Publisher IOS Press Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CCIA  
  Notes IAM;MILAB Approved no  
  Call Number IAM @ iam @ GRC2004 Serial 1488  
 

 
Author Ernest Valveny; Ricardo Toledo; Ramon Baldrich; Enric Marti
  Title Combining recognition-based and segmentation-based approaches for graphic symbol recognition using deformable template matching Type Conference Article
  Year 2002 Publication Proceedings of the Second IASTED International Conference on Visualization, Imaging and Image Processing (VIIP 2002) Abbreviated Journal  
  Volume Issue Pages 502–507  
  Keywords  
  Abstract  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes DAG;RV;CAT;IAM;CIC;ADAS Approved no  
  Call Number IAM @ iam @ VTB2002 Serial 1660  
 

 
Author Jose Manuel Alvarez; Antonio Lopez; Theo Gevers; Felipe Lumbreras
  Title Combining Priors, Appearance and Context for Road Detection Type Journal Article
  Year 2014 Publication IEEE Transactions on Intelligent Transportation Systems Abbreviated Journal TITS  
  Volume 15 Issue 3 Pages 1168-1178  
  Keywords Illuminant invariance; lane markings; road detection; road prior; road scene understanding; vanishing point; 3-D scene layout  
  Abstract Detecting the free road surface ahead of a moving vehicle is an important research topic in different areas of computer vision, such as autonomous driving or car collision warning.
Current vision-based road detection methods are usually based solely on low-level features. Furthermore, they generally assume structured roads, road homogeneity, and uniform lighting conditions, constraining their applicability in real-world scenarios. In this paper, road priors and contextual information are introduced for road detection. First, we propose an algorithm to estimate road priors online using geographical information, providing relevant initial information about the road location. Then, contextual cues, including horizon lines, vanishing points, lane markings, 3-D scene layout, and road geometry, are used in addition to low-level cues derived from the appearance of roads. Finally, a generative model is used to combine these cues and priors, leading to a road detection method that is, to a large degree, robust to varying imaging conditions, road types, and scenarios.
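  As a hedged illustration of how priors and cues can be fused probabilistically, the sketch below combines a per-pixel road prior with several cue likelihood maps under a naive conditional-independence assumption. This is not the paper's generative model; the input maps and the independence assumption are placeholders for the example.

    import numpy as np

    def fuse_road_cues(prior, cue_maps, eps=1e-6):
        """Per-pixel fusion of a road prior with several cue maps.

        A minimal sketch of probabilistic cue combination (not the paper's
        model).  `prior` is an HxW map with the prior probability that each
        pixel is road; `cue_maps` is a list of HxW maps, one per cue
        (appearance, horizon/vanishing-point consistency, lane markings, ...),
        each giving the probability of road according to that cue alone.
        Cues are treated as conditionally independent given the road label.
        """
        log_odds = np.log((prior + eps) / (1.0 - prior + eps))
        for cue in cue_maps:
            log_odds += np.log((cue + eps) / (1.0 - cue + eps))
        posterior = 1.0 / (1.0 + np.exp(-log_odds))
        return posterior > 0.5  # binary road mask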
 
  Address  
  Corporate Author Thesis  
  Publisher IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1524-9050 ISBN Medium  
  Area Expedition Conference  
  Notes ADAS; 600.076;ISE Approved no  
  Call Number Admin @ si @ ALG2014 Serial 2501  
 

 
Author Xinhang Song; Shuqiang Jiang; Luis Herranz
  Title Combining Models from Multiple Sources for RGB-D Scene Recognition Type Conference Article
  Year 2017 Publication 26th International Joint Conference on Artificial Intelligence Abbreviated Journal  
  Volume Issue Pages 4523-4529  
  Keywords Robotics and Vision; Vision and Perception  
  Abstract Depth can complement RGB with useful cues about object volumes and scene layout. However, RGB-D image datasets are still too small for directly training deep convolutional neural networks (CNNs), in contrast to the massive monomodal RGB datasets. Previous works in RGB-D recognition typically combine two separate networks for RGB and depth data, pretrained on a large RGB dataset and then fine-tuned to the respective target RGB and depth datasets. These approaches have several limitations: 1) they only use low-level filters learned from RGB data, and thus cannot properly exploit depth-specific patterns, and 2) RGB and depth features are only combined at high levels but rarely at lower levels. In this paper, we propose a framework that leverages both knowledge acquired from large RGB datasets and depth-specific cues learned from the limited depth data, obtaining more effective multi-source and multi-modal representations. We propose a multi-modal combination method that selects discriminative combinations of layers from the different source models and target modalities, capturing both high-level properties of the task and intrinsic low-level properties of both modalities.
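  A rough sketch of the layer-selection idea: given features already extracted from an RGB-trained model and a depth-adapted model, search for the combination of layers whose concatenation is most discriminative under cross-validation. This is an assumption-laden illustration (exhaustive search, a linear SVM, scikit-learn utilities), not the authors' method.

    import numpy as np
    from itertools import combinations
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import cross_val_score

    def select_layer_combination(rgb_feats, depth_feats, labels, max_layers=3):
        """Pick a discriminative combination of layers from two source models.

        `rgb_feats` and `depth_feats` map layer names to (n_images, dim)
        arrays of features extracted from an RGB-trained and a depth-adapted
        network (feature extraction is outside this sketch); all names and the
        brute-force search are illustrative assumptions.
        """
        candidates = [("rgb", k) for k in rgb_feats] + [("depth", k) for k in depth_feats]
        best_combo, best_score = None, -np.inf
        for r in range(1, max_layers + 1):
            for combo in combinations(candidates, r):
                X = np.hstack([
                    rgb_feats[k] if src == "rgb" else depth_feats[k]
                    for src, k in combo
                ])
                score = cross_val_score(LinearSVC(), X, labels, cv=3).mean()
                if score > best_score:
                    best_combo, best_score = combo, score
        return best_combo, best_score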
  Address Melbourne; Australia; August 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference IJCAI  
  Notes LAMP; 600.120 Approved no  
  Call Number Admin @ si @ SJH2017b Serial 2966  
 

 
Author Mohammad Ali Bagheri; Qigang Gao; Sergio Escalera
  Title Combining Local and Global Learners in the Pairwise Multiclass Classification Type Journal Article
  Year 2015 Publication Pattern Analysis and Applications Abbreviated Journal PAA  
  Volume 18 Issue 4 Pages 845-860  
  Keywords Multiclass classification; Pairwise approach; One-versus-one  
  Abstract Pairwise classification is a well-known class binarization technique that converts a multiclass problem into a number of two-class problems, one for each pair of classes. However, in the pairwise technique, nuisance votes from many irrelevant classifiers may result in a wrong class prediction. To overcome this problem, a simple but efficient method, named Local Crossing Off (LCO), is proposed and evaluated in this paper. It is based on excluding some classes and focusing on the most probable classes in the neighborhood space, employing modified versions of the standard K-nearest neighbor and large margin nearest neighbor algorithms. The LCO method takes advantage of the local learning behavior of nearest neighbor classification as well as the global behavior of powerful binary classifiers that discriminate between two classes. Combining these two properties avoids the weaknesses of each method and increases the efficiency of the whole classification system. On several benchmark datasets of varying size and difficulty, we found that the LCO approach leads to significant improvements using different base learners. The experimental results show that the proposed technique not only achieves better classification accuracy than other standard approaches, but is also computationally more efficient for classification problems with a relatively large number of target classes.
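  A minimal sketch of the Local Crossing Off idea described above: restrict one-versus-one voting to the classes that dominate the query's K-nearest-neighbor neighborhood. The dictionary of fitted pairwise classifiers, the value of K, and the number of retained classes are illustrative assumptions, not the authors' settings.

    from collections import Counter
    from sklearn.neighbors import NearestNeighbors

    def lco_predict(x, X_train, y_train, pairwise_clf, k=15, n_classes_kept=3):
        """Local Crossing Off, sketched.

        `pairwise_clf[(a, b)]` is assumed to be a fitted binary classifier
        separating class a from class b (a < b); `X_train`/`y_train` are NumPy
        arrays; `x` is a single query sample.  Illustrative only.
        """
        nn = NearestNeighbors(n_neighbors=k).fit(X_train)
        _, idx = nn.kneighbors(x.reshape(1, -1))
        local_counts = Counter(y_train[idx[0]])
        kept = [c for c, _ in local_counts.most_common(n_classes_kept)]

        votes = Counter()
        for i, a in enumerate(kept):
            for b in kept[i + 1:]:
                lo, hi = min(a, b), max(a, b)
                pred = pairwise_clf[(lo, hi)].predict(x.reshape(1, -1))[0]
                votes[pred] += 1
        # Fall back to the local majority if a single class dominates the neighborhood.
        return votes.most_common(1)[0][0] if votes else kept[0]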
  Address  
  Corporate Author Thesis  
  Publisher Springer London Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1433-7541 ISBN Medium  
  Area Expedition Conference  
  Notes HuPBA;MILAB Approved no  
  Call Number Admin @ si @ BGE2014 Serial 2441  
 

 
Author Xavier Boix; Josep M. Gonfaus; Fahad Shahbaz Khan; Joost Van de Weijer; Andrew Bagdanov; Marco Pedersoli; Jordi Gonzalez; Joan Serrat
  Title Combining local and global bag-of-word representations for semantic segmentation Type Conference Article
  Year 2009 Publication Workshop on The PASCAL Visual Object Classes Challenge Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract  
  Address Kyoto (Japan)  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICCV  
  Notes ADAS;ISE Approved no  
  Call Number ADAS @ adas @ BGS2009 Serial 1273  
 

 
Author Arnau Ramisa; Alex Goldhoorn; David Aldavert; Ricardo Toledo; Ramon Lopez de Mantaras
  Title Combining Invariant Features and the ALV Homing Method for Autonomous Robot Navigation Based on Panoramas Type Journal Article
  Year 2011 Publication Journal of Intelligent and Robotic Systems Abbreviated Journal JIRC  
  Volume 64 Issue 3-4 Pages 625-649  
  Keywords  
  Abstract Biologically inspired homing methods, such as the Average Landmark Vector, are an interesting solution for local navigation due to their simplicity. However, they usually require modifying the environment by placing artificial landmarks in order to work reliably. In this paper we combine the Average Landmark Vector with invariant feature points automatically detected in panoramic images to overcome this limitation. The proposed approach has been evaluated first in simulation and, since the results were promising, also on two datasets of panoramas from real-world environments.
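  The Average Landmark Vector rule itself is simple enough to sketch: the ALV is the mean of the unit vectors pointing at the detected landmarks, and the home vector is the difference between the current ALV and the one stored at the goal. In the sketch below, feature bearings (assumed to be expressed in a common compass frame) stand in for artificial landmarks, as in the abstract.

    import numpy as np

    def average_landmark_vector(bearings_rad):
        """ALV: mean of unit vectors pointing at the landmarks, given their
        bearings in radians (robot frame, aligned to a common compass)."""
        return np.array([np.cos(bearings_rad), np.sin(bearings_rad)]).mean(axis=1)

    def alv_home_vector(current_bearings, home_bearings):
        """Approximate direction to move toward home: the difference between
        the ALV at the current position and the ALV stored at the home
        position.  A minimal sketch of the classic ALV rule, with invariant
        feature bearings replacing artificial landmarks."""
        return (average_landmark_vector(current_bearings)
                - average_landmark_vector(home_bearings))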
  Address  
  Corporate Author Thesis  
  Publisher Springer Netherlands Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0921-0296 ISBN Medium  
  Area Expedition Conference  
  Notes RV;ADAS Approved no  
  Call Number Admin @ si @ RGA2011 Serial 1728  
 

 
Author Muhammad Anwer Rao; Fahad Shahbaz Khan; Joost Van de Weijer; Jorma Laaksonen
  Title Combining Holistic and Part-based Deep Representations for Computational Painting Categorization Type Conference Article
  Year 2016 Publication 6th International Conference on Multimedia Retrieval Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Automatic analysis of visual art, such as paintings, is a challenging interdisciplinary research problem. Conventional approaches rely only on global scene characteristics, encoding holistic information for computational painting categorization. We argue that such approaches are sub-optimal and that discriminative common visual structures provide complementary information for painting classification. We present an approach that encodes both the global scene layout and discriminative latent common structures for computational painting categorization. The regions of interest are automatically extracted, without any manual part labeling, by training class-specific deformable part-based models. Both the holistic image and the regions of interest are then described using multi-scale dense convolutional features. These features are pooled separately using Fisher vector encoding and afterwards concatenated into a single image representation. Experiments are performed on a challenging dataset with 91 different painters and 13 diverse painting styles. Our approach outperforms the standard method, which only employs global scene characteristics. Furthermore, our method achieves state-of-the-art results, outperforming a recent multi-scale deep features based approach [11] by 6.4% and 3.8% on artist and style classification, respectively.
  Address New York; USA; June 2016  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICMR  
  Notes LAMP; 600.068; 600.079;ADAS Approved no  
  Call Number Admin @ si @ RKW2016 Serial 2763  
 

 
Author R. Valenti; Theo Gevers
  Title Combining Head Pose and Eye Location Information for Gaze Estimation Type Journal Article
  Year 2012 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP  
  Volume 21 Issue 2 Pages 802-815  
  Keywords  
  Abstract Impact factor 2010: 2.92
Impact factor 2011/12?: 3.32
Head pose and eye location for gaze estimation have been separately studied in numerous works in the literature. Previous research shows that satisfactory accuracy in head pose and eye location estimation can be achieved in constrained settings. However, in the presence of nonfrontal faces, eye locators are not adequate to accurately locate the center of the eyes. On the other hand, head pose estimation techniques are able to deal with these conditions; hence, they may be suited to enhance the accuracy of eye localization. Therefore, in this paper, a hybrid scheme is proposed to combine head pose and eye location information to obtain enhanced gaze estimation. To this end, the transformation matrix obtained from the head pose is used to normalize the eye regions, and in turn, the transformation matrix generated by the found eye location is used to correct the pose estimation procedure. The scheme is designed to enhance the accuracy of eye location estimations, particularly in low-resolution videos, to extend the operative range of the eye locators, and to improve the accuracy of the head pose tracker. These enhanced estimations are then combined to obtain a novel visual gaze estimation system, which uses both eye location and head information to refine the gaze estimates. From the experimental results, it can be derived that the proposed unified scheme improves the accuracy of eye estimations by 16% to 23%. Furthermore, it considerably extends its operating range by more than 15° by overcoming the problems introduced by extreme head poses. Moreover, the accuracy of the head pose tracker is improved by 12% to 24%. Finally, the experimentation on the proposed combined gaze estimation system shows that it is accurate (with a mean error between 2° and 5°) and that it can be used in cases where classic approaches would fail without imposing restraints on the position of the head.
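  A minimal sketch of the combination idea: an eye-in-head gaze direction estimated on pose-normalized eye regions is rotated back into the camera frame by the head-pose rotation. The 3x3 rotation matrix and the eye-in-head unit vector are assumed inputs; the paper's mutual correction loop between the two estimators is not reproduced here.

    import numpy as np

    def combined_gaze(head_rotation, eye_in_head_dir):
        """Combine head pose and eye location into a single gaze direction.

        `head_rotation` is a 3x3 rotation matrix from the head frame to the
        camera frame; `eye_in_head_dir` is the gaze direction estimated from
        the pose-normalized eye regions, expressed in the head frame.
        Illustrative only.
        """
        R = np.asarray(head_rotation, dtype=float)
        g = np.asarray(eye_in_head_dir, dtype=float)
        g = g / np.linalg.norm(g)
        return R @ g  # gaze direction in the camera frame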
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1057-7149 ISBN Medium  
  Area Expedition Conference  
  Notes ALTRES;ISE Approved no  
  Call Number Admin @ si @ VaG 2012b Serial 1851  
 

 
Author Simone Balocco; Carlo Gatta; Francesco Ciompi; Oriol Pujol; Xavier Carrillo; J. Mauri; Petia Radeva
  Title Combining Growcut and Temporal Correlation for IVUS Lumen Segmentation Type Conference Article
  Year 2011 Publication 5th Iberian Conference on Pattern Recognition and Image Analysis Abbreviated Journal  
  Volume 6669 Issue Pages 556-563  
  Keywords  
  Abstract The assessment of arterial luminal area, performed by IVUS analysis, is a clinical index used to evaluate the degree of coronary artery disease. In this paper we propose a novel approach to automatically segment the vessel lumen, which combines model-based temporal information extracted from successive frames of the sequence, with spatial classification using the Growcut algorithm. The performance of the method is evaluated by an in vivo experiment on 300 IVUS frames. The automatic and manual segmentation performances in general vessel and stent frames are comparable. The average segmentation error in vessel, stent and bifurcation frames are 0.17±0.08 mm, 0.18±0.07 mm and 0.31±0.12 mm respectively.  
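  One way to picture the temporal component is seed initialization: the lumen segmented in the previous frame supplies foreground and background seeds for a Growcut-style classification of the current frame. The sketch below (using SciPy morphology, with an illustrative margin parameter) is an assumption about how such coupling could look, not the authors' exact model.

    import numpy as np
    from scipy.ndimage import binary_erosion, binary_dilation

    def growcut_seeds_from_previous(prev_lumen_mask, margin=5):
        """Build seed labels for the current IVUS frame from the previous
        frame's lumen mask.

        Pixels well inside the previous lumen become foreground seeds, pixels
        well outside become background seeds, and the uncertain band in
        between is left (label 0) for the Growcut-like classifier to decide.
        `margin` (pixels) is an illustrative choice.
        """
        structure = np.ones((2 * margin + 1, 2 * margin + 1), dtype=bool)
        fg_seeds = binary_erosion(prev_lumen_mask, structure=structure)
        bg_seeds = ~binary_dilation(prev_lumen_mask, structure=structure)
        labels = np.zeros(prev_lumen_mask.shape, dtype=np.int8)  # 0 = undecided
        labels[fg_seeds] = 1   # lumen
        labels[bg_seeds] = -1  # background
        return labels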
  Address Las Palmas de Gran Canaria. Spain  
  Corporate Author Thesis  
  Publisher Springer Berlin Heidelberg Place of Publication Berlin Editor Jordi Vitria; Joao Miguel Raposo; Mario Hernandez  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN 0302-9743 ISBN 978-3-642-21256-7 Medium  
  Area Expedition Conference IbPRIA  
  Notes MILAB;HuPBA Approved no  
  Call Number Admin @ si @ BGC2011a Serial 1741  
 

 
Author Marçal Rusiñol; J. Chazalon; Jean-Marc Ogier
  Title Combining Focus Measure Operators to Predict OCR Accuracy in Mobile-Captured Document Images Type Conference Article
  Year 2014 Publication 11th IAPR International Workshop on Document Analysis and Systems Abbreviated Journal  
  Volume Issue Pages 181 - 185  
  Keywords  
  Abstract Mobile document image acquisition is a new trend raising serious issues in business document processing workflows. Such a digitization procedure is unreliable and introduces many distortions which must be detected as soon as possible, on the mobile device, to avoid paying data transmission fees and losing information when a document with only temporary availability cannot be re-captured later. In this context, out-of-focus blur is a major issue: users have no direct control over it, and it seriously degrades OCR recognition. In this paper, we concentrate on the estimation of focus quality, to ensure sufficient legibility of a document image for OCR processing. We propose two contributions to improve OCR accuracy prediction for mobile-captured document images. First, we present 24 focus measures, never before tested on document images, which are fast to compute and require no training. Second, we show that a combination of those measures achieves state-of-the-art performance regarding the correlation with OCR accuracy. The resulting approach is fast, robust, and easy to implement on a mobile device. Experiments are performed on a public dataset, and precise details about the image processing are given.
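  To make the focus-measure idea concrete, the sketch below computes a few classical, training-free sharpness measures with OpenCV and averages their normalized values into one score. The particular operators, normalization constants, and equal weights are illustrative assumptions; the paper evaluates 24 measures and calibrates their combination against OCR accuracy.

    import cv2
    import numpy as np

    def focus_measures(gray):
        """A few classical, training-free focus measures for a grayscale
        document image (illustrative choices, not the paper's 24 operators)."""
        lap = cv2.Laplacian(gray, cv2.CV_64F)
        gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
        gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)
        return {
            "variance_of_laplacian": float(lap.var()),
            "tenengrad": float(np.mean(gx ** 2 + gy ** 2)),
            "mean_abs_laplacian": float(np.mean(np.abs(lap))),
        }

    def combined_focus_score(gray, weights=None):
        """Combine normalized focus measures into one sharpness score.  The
        normalization constants and equal weights are placeholders; mapping
        this score to predicted OCR accuracy would still require calibration
        on data."""
        m = focus_measures(gray)
        scales = {"variance_of_laplacian": 1e3, "tenengrad": 1e4, "mean_abs_laplacian": 1e1}
        weights = weights or {k: 1.0 / len(m) for k in m}
        return sum(weights[k] * min(m[k] / scales[k], 1.0) for k in m)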
  Address Tours; France; April 2014  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-1-4799-3243-6 Medium  
  Area Expedition Conference DAS  
  Notes DAG; 601.223; 600.077 Approved no  
  Call Number Admin @ si @ RCO2014a Serial 2545  
 

 
Author Jose Manuel Alvarez
  Title Combining Context and Appearance for Road Detection Type Book Whole
  Year 2010 Publication PhD Thesis, Universitat Autonoma de Barcelona-CVC Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Road traffic crashes have become a major cause of death and injury throughout the world. Hence, in order to improve road safety, the automobile industry is moving towards the development of vehicles with autonomous functionalities such as keeping in the right lane, maintaining a safe distance between vehicles, or regulating the speed of the vehicle according to the traffic conditions. A key component of these systems is vision-based road detection, which aims to detect the free road surface ahead of the moving vehicle. Detecting the road using a monocular vision system is very challenging since the road is an outdoor scenario imaged from a mobile platform. Hence, the detection algorithm must be able to deal with continuously changing imaging conditions such as the presence of different objects (vehicles, pedestrians), different environments (urban, highways, off-road), different road types (shape, color), and different imaging conditions (varying illumination, different viewpoints and changing weather conditions). Therefore, in this thesis, we focus on vision-based road detection using a single color camera. More precisely, we first focus on analyzing and grouping pixels according to their low-level properties; in this way, two different approaches are presented to exploit color and photometric invariance. Then, the research of the thesis turns to exploiting context information, which provides relevant knowledge about the road not from pixel features of road regions but from semantic information obtained by analyzing the scene; here we present two different approaches to infer the geometry of the road ahead of the moving vehicle. Finally, we focus on combining these context and appearance (color) approaches to improve the overall performance of road detection algorithms. The qualitative and quantitative results presented in this thesis on real-world driving sequences show that the proposed method is robust to varying imaging conditions, road types and scenarios, going beyond the state of the art.
  Address  
  Corporate Author Thesis Ph.D. thesis  
  Publisher Ediciones Graficas Rey Place of Publication Editor Antonio Lopez;Theo Gevers  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-84-937261-8-8 Medium  
  Area Expedition Conference  
  Notes ADAS Approved no  
  Call Number Admin @ si @ Alv2010 Serial 1454  
 

 
Author Fadi Dornaika; Francisco Javier Orozco; Jordi Gonzalez
  Title Combined Head, Lips, Eyebrows, and Eyelids Tracking Using Adaptive Appearance Models Type Book Chapter
  Year 2006 Publication IV Conference on Articulated Motion and Deformable Objects (AMDO´06), LNCS 4069: 110–119 Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract  
  Address Mallorca (Spain)  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number ISE @ ise @ DOG2006 Serial 687  