Author |
Huamin Ren; Nattiya Kanhabua; Andreas Mogelmose; Weifeng Liu; Kaustubh Kulkarni; Sergio Escalera; Xavier Baro; Thomas B. Moeslund |
|
|
Title |
Back-dropout Transfer Learning for Action Recognition |
Type |
Journal Article |
|
Year |
2018 |
Publication |
IET Computer Vision |
Abbreviated Journal |
IETCV |
|
|
Volume |
12 |
Issue |
4 |
Pages |
484-491 |
|
|
Keywords |
Learning (artificial intelligence); Pattern Recognition |
|
|
Abstract |
Transfer learning aims at adapting a model learned from a source dataset to a target dataset. It is a beneficial approach especially when annotating the target dataset is expensive or infeasible, and it has demonstrated powerful learning capabilities in various vision tasks. Despite being promising, it remains an open question how to adapt the model learned from the source dataset to the target dataset. One big challenge is to prevent the impact of category bias on classification performance: dataset bias exists when two images from the same category, but from different datasets, are not classified as the same class. To address this problem, a transfer learning algorithm called negative back-dropout transfer learning (NB-TL) has been proposed, which takes misclassified images and applies a back-dropout strategy to them to penalize errors. Experimental results demonstrate the effectiveness of the proposed algorithm. In particular, the authors evaluate NB-TL on the UCF101 action recognition dataset, achieving an 88.9% recognition rate. |
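The record describes back-dropout only at a high level. The following hypothetical PyTorch sketch illustrates one way to penalize misclassified target samples by dropping part of their feature gradient on the backward pass; the network, drop rate, and gradient-masking form are all assumptions, not the paper's formulation.

```python
# Hypothetical sketch of a "back-dropout" step (PyTorch); the actual NB-TL
# formulation may differ. Dropout acts only on the backward pass, and only
# for misclassified (negative) samples.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

x = torch.randn(32, 128)            # a batch from the target dataset
y = torch.randint(0, 10, (32,))     # target labels

feats = model[:2](x)                # intermediate features
# Mark misclassified samples; drop a random subset of their feature gradients.
with torch.no_grad():
    wrong = (model[2](feats).argmax(1) != y).float().unsqueeze(1)
drop = (torch.rand_like(feats) > 0.5).float()      # assumed 50% drop rate
mask = wrong * drop + (1 - wrong)   # identity for correctly classified samples
feats.register_hook(lambda g: g * mask)

loss = F.cross_entropy(model[2](feats), y)
opt.zero_grad()
loss.backward()
opt.step()
```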
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HUPBA; no proj |
Approved |
no |
|
|
Call Number |
Admin @ si @ RKM2018 |
Serial |
3071 |
|
|
|
|
|
Author |
Stefan Schurischuster; Beatriz Remeseiro; Petia Radeva; Martin Kampel |
|
|
Title |
A Preliminary Study of Image Analysis for Parasite Detection on Honey Bees |
Type |
Conference Article |
|
Year |
2018 |
Publication |
15th International Conference on Image Analysis and Recognition |
Abbreviated Journal |
|
|
|
Volume |
10882 |
Issue |
|
Pages |
465-473 |
|
|
Keywords |
|
|
|
Abstract |
Varroa destructor is a parasite harming bee colonies. As the worldwide bee population is in danger, beekeepers as well as researchers are looking for methods to monitor the health of bee hives. In this context, we present a preliminary study to detect parasites in bee videos by means of image analysis and machine learning techniques. For this purpose, each video frame is analyzed individually to extract bee image patches, which are then processed to compute image descriptors and finally classified into mite and non-mite bees. The experimental results demonstrate the adequacy of the proposed method, which can serve as a stepping stone for a further bee monitoring system. |
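The abstract's per-frame pipeline (patch extraction, descriptors, classification) can be sketched with off-the-shelf components. This is a minimal sketch assuming a color-histogram descriptor and an SVM; the paper's actual descriptors and classifier may differ.

```python
# Minimal sketch of the patch -> descriptor -> classifier pipeline
# (illustrative; not the paper's exact setup).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def color_histogram(patch, bins=8):
    """A simple image descriptor: per-channel intensity histogram."""
    return np.concatenate(
        [np.histogram(patch[..., c], bins=bins, range=(0, 1))[0]
         for c in range(patch.shape[-1])]
    ).astype(float)

# Stand-in data: 200 random 32x32 RGB "bee patches" with mite/non-mite labels.
rng = np.random.default_rng(0)
patches = rng.random((200, 32, 32, 3))
labels = rng.integers(0, 2, 200)

X = np.array([color_histogram(p) for p in patches])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)

clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```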
|
|
Address |
Povoa de Varzim; Portugal; June 2018 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICIAR |
|
|
Notes |
MILAB; no proj |
Approved |
no |
|
|
Call Number |
Admin @ si @ SRR2018a |
Serial |
3110 |
|
|
|
|
|
Author |
Mohamed Ilyes Lakhal; Hakan Cevikalp; Sergio Escalera |
|
|
Title |
CRN: End-to-end Convolutional Recurrent Network Structure Applied to Vehicle Classification |
Type |
Conference Article |
|
Year |
2018 |
Publication |
13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications |
Abbreviated Journal |
|
|
|
Volume |
5 |
Issue |
|
Pages |
137-144 |
|
|
Keywords |
Vehicle Classification; Deep Learning; End-to-end Learning |
|
|
Abstract |
Vehicle type classification is considered to be a central part of intelligent traffic systems. In recent years, deep learning methods have emerged as the state of the art in many computer vision tasks. In this paper, we present a novel yet simple deep learning framework for the vehicle type classification problem. We propose an end-to-end trainable system that combines a convolutional neural network for feature extraction with a recurrent neural network as a classifier. The recurrent network structure is used to handle various types of feature inputs, and at the same time allows the system to produce a single class prediction or a set of them. In order to assess the effectiveness of our solution, we conducted a set of experiments on two public datasets, obtaining state-of-the-art results. In addition, we also report results on the newly released MIO-TCD dataset. |
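A minimal PyTorch sketch of the convolutional-recurrent pattern the abstract describes: a CNN extracts a feature map whose rows are fed as a sequence to a GRU, which emits the class prediction. Layer sizes and the 11-class head (assumed to match MIO-TCD's vehicle categories) are illustrative, not the paper's architecture.

```python
# Sketch of a convolutional-recurrent classifier in the spirit of CRN.
import torch
import torch.nn as nn

class ConvRecurrentNet(nn.Module):
    def __init__(self, n_classes=11):
        super().__init__()
        self.cnn = nn.Sequential(                      # feature extractor
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.rnn = nn.GRU(input_size=4 * 32, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)             # class predictions

    def forward(self, x):
        f = self.cnn(x)                                # (B, 32, 4, 4)
        seq = f.permute(0, 2, 3, 1).flatten(2)         # rows as a sequence: (B, 4, 128)
        _, h = self.rnn(seq)                           # final hidden state
        return self.fc(h.squeeze(0))                   # (B, n_classes)

net = ConvRecurrentNet()
logits = net(torch.randn(8, 3, 64, 64))
print(logits.shape)  # torch.Size([8, 11])
```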
|
|
Address |
Funchal; Madeira; Portugal; January 2018 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
VISAPP |
|
|
Notes |
HUPBA |
Approved |
no |
|
|
Call Number |
Admin @ si @ LCE2018a |
Serial |
3094 |
|
|
|
|
|
Author |
Esmitt Ramirez; Carles Sanchez; Agnes Borras; Marta Diez-Ferrer; Antoni Rosell; Debora Gil |
|
|
Title |
BronchoX: bronchoscopy exploration software for biopsy intervention planning |
Type |
Journal Article |
|
Year |
2018 |
Publication |
Healthcare Technology Letters |
Abbreviated Journal |
HTL |
|
|
Volume |
5 |
Issue |
5 |
Pages |
177–182 |
|
|
Keywords |
|
|
|
Abstract |
Virtual bronchoscopy (VB) is a non-invasive exploration tool for intervention planning and navigation of possible pulmonary lesions (PLs). VB software involves locating a PL and calculating a route, starting from the trachea, to reach it. Selecting a VB software package can be a complex process, and there is no consensus in the medical software development community on which system or framework is best suited. The authors present Bronchoscopy Exploration (BronchoX), a VB software package for planning biopsy interventions that generates physician-readable instructions to reach the PLs. The solution is open source, multiplatform, and extensible with future functionality, designed by a multidisciplinary research and development group. BronchoX combines different algorithms for segmentation, visualisation, and navigation of the respiratory tract. The reported experiments focus on testing the effectiveness of the proposal as exploration software and on measuring its accuracy as a guiding system to reach PLs. To this end, 40 different virtual planning paths were created to guide physicians to distal bronchioles. The results confirm BronchoX as functional software and demonstrate that, by following simple instructions, it is possible to reach distal lesions from the trachea. |
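The route-calculation step (a path from the trachea to a PL over the segmented airway tree) can be illustrated as a graph search. Below is a minimal sketch assuming a toy adjacency-list airway tree and breadth-first search; BronchoX's actual algorithms and data structures are not reproduced here.

```python
# Sketch of route planning over a segmented airway tree (toy stand-in graph).
from collections import deque

airway = {                      # adjacency list: branch point -> neighbours
    "trachea": ["carina"],
    "carina": ["trachea", "left_main", "right_main"],
    "left_main": ["carina", "lesion_branch"],
    "right_main": ["carina"],
    "lesion_branch": ["left_main"],
}

def route(tree, start, goal):
    """Breadth-first search: shortest branch sequence from start to goal."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in tree[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(" -> ".join(route(airway, "trachea", "lesion_branch")))
```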
|
|
Address |
|
|
|
Corporate Author |
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
IAM; 600.096; 600.075; 601.323; 601.337; 600.145 |
Approved |
no |
|
|
Call Number |
Admin @ si @ RSB2018a |
Serial |
3132 |
|
|
|
|
|
Author |
Yaxing Wang; Joost Van de Weijer; Luis Herranz |
|
|
Title |
Mix and match networks: encoder-decoder alignment for zero-pair image translation |
Type |
Conference Article |
|
Year |
2018 |
Publication |
31st IEEE Conference on Computer Vision and Pattern Recognition |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
5467-5476 |
|
|
Keywords |
|
|
|
Abstract |
We address the problem of image translation between domains or modalities for which no direct paired data is available (i.e. zero-pair translation). We propose mix and match networks, based on multiple encoders and decoders aligned in such a way that other encoder-decoder pairs can be composed at test time to perform unseen image translation tasks between domains or modalities for which explicit paired samples were not seen during training. We study the impact of autoencoders, side information and losses in improving the alignment and transferability of trained pairwise translation models to unseen translations. We show our approach is scalable and can perform colorization and style transfer between unseen combinations of domains. We evaluate our system in a challenging cross-modal setting where semantic segmentation is estimated from depth images, without explicit access to any depth-semantic segmentation training pairs. Our model outperforms baselines based on pix2pix and CycleGAN models. |
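The compositional mechanism is straightforward to sketch: encoders and decoders are trained so their latent spaces align, after which any encoder-decoder pair can be chained at test time, including pairs never seen together during training. The modules and sizes below are illustrative assumptions.

```python
# Minimal sketch of the mix-and-match idea: shared latent space lets unseen
# encoder-decoder pairs be composed at test time (zero-pair translation).
import torch
import torch.nn as nn

latent = 64
encoders = {
    "rgb":   nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, latent)),
    "depth": nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 1, latent)),
}
decoders = {
    "depth":    nn.Linear(latent, 32 * 32 * 1),
    "semantic": nn.Linear(latent, 32 * 32 * 5),   # 5 semantic classes (assumed)
}

def translate(x, src, dst):
    """Compose any aligned encoder-decoder pair, even one never trained jointly."""
    return decoders[dst](encoders[src](x))

# Zero-pair translation: depth -> semantic segmentation, with no
# depth/semantic pairs ever seen during training.
depth_img = torch.randn(1, 1, 32, 32)
seg = translate(depth_img, "depth", "semantic").view(1, 5, 32, 32)
print(seg.argmax(1).shape)  # per-pixel class map: torch.Size([1, 32, 32])
```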
|
|
Address |
Salt Lake City; USA; June 2018 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CVPR |
|
|
Notes |
LAMP; 600.109; 600.106; 600.120 |
Approved |
no |
|
|
Call Number |
Admin @ si @ WWH2018b |
Serial |
3131 |
|
|
|
|
|
Author |
Stefan Lonn; Petia Radeva; Mariella Dimiccoli |
|
|
Title |
A picture is worth a thousand words but how to organize thousands of pictures? |
Type |
Miscellaneous |
|
Year |
2018 |
Publication |
arXiv |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
We live in a society where the large majority of the population has a camera-equipped smartphone. In addition, hard drives and cloud storage are getting cheaper and cheaper, leading to a tremendous growth in stored personal photos. Unlike photo collections captured by a digital camera, which typically are pre-processed by the user who organizes them into event-related folders, smartphone pictures are automatically stored in the cloud. As a consequence, photo collections captured by a smartphone are highly unstructured, and because smartphones are ubiquitous, they present a larger variability compared to pictures captured by a digital camera. To address the need to organize large smartphone photo collections automatically, we propose here a new methodology for hierarchical photo organization into topics and topic-related categories. Our approach successfully estimates latent topics in the pictures by applying probabilistic Latent Semantic Analysis, and automatically assigns a name to each topic by relying on a lexical database. Topic-related categories are then estimated by using a set of topic-specific Convolutional Neural Networks. To validate our approach, we assemble and make public a large dataset of more than 8,000 smartphone pictures from 10 persons. Experimental results demonstrate better user satisfaction with respect to state-of-the-art solutions in terms of organization. |
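A sketch of the topic-discovery step follows. scikit-learn provides no pLSA implementation, so LatentDirichletAllocation is used here as a closely related stand-in, and the per-photo tag strings are toy data.

```python
# Sketch of topic discovery over per-photo visual concepts (LDA as a
# stand-in for the paper's pLSA).
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

photo_tags = [                      # visual concepts detected per photo (toy)
    "dog park grass", "cat sofa indoor", "beach sea sand",
    "dog beach sea", "sofa indoor lamp", "grass park tree",
]
counts = CountVectorizer().fit(photo_tags)
X = counts.transform(photo_tags)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
vocab = counts.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [vocab[i] for i in comp.argsort()[-3:]]
    print(f"topic {k}: {top}")       # a lexical database would name the topic
```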
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB; no proj |
Approved |
no |
|
|
Call Number |
Admin @ si @ LRD2018 |
Serial |
3111 |
|
|
|
|
|
Author |
Mark Philip Philipsen; Jacob Velling Dueholm; Anders Jorgensen; Sergio Escalera; Thomas B. Moeslund |
|
|
Title |
Organ Segmentation in Poultry Viscera Using RGB-D |
Type |
Journal Article |
|
Year |
2018 |
Publication |
Sensors |
Abbreviated Journal |
SENS |
|
|
Volume |
18 |
Issue |
1 |
Pages |
117 |
|
|
Keywords |
semantic segmentation; RGB-D; random forest; conditional random field; 2D; 3D; CNN |
|
|
Abstract |
We present a pattern recognition framework for semantic segmentation of visual structures, that is, multi-class labelling at pixel level, and apply it to the task of segmenting organs in the eviscerated viscera from slaughtered poultry in RGB-D images. This is a step towards replacing the current strenuous manual inspection at poultry processing plants. Features are extracted from feature maps such as activation maps from a convolutional neural network (CNN). A random forest classifier assigns class probabilities, which are further refined by utilizing context in a conditional random field. The presented method is compatible with both 2D and 3D features, which allows us to explore the value of adding 3D and CNN-derived features. The dataset consists of 604 RGB-D images showing 151 unique sets of eviscerated viscera from four different perspectives. A mean Jaccard index of 78.11% is achieved across the four classes of organs by using features derived from 2D, 3D and a CNN, compared to 74.28% using only basic 2D image features. |
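The per-pixel classification stage can be sketched as below. The conditional random field refinement is approximated by a simple neighbourhood smoothing of the random forest's probability maps; that smoothing is an assumption, not the paper's CRF.

```python
# Sketch of the classification stage: random forest assigns per-pixel class
# probabilities from pixel features; smoothing stands in for the CRF.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)
H, W, F, C = 32, 32, 6, 4                  # image size, features, organ classes
feats = rng.random((H, W, F))              # e.g. 2D/3D/CNN-derived features (toy)
labels = rng.integers(0, C, (H, W))        # toy ground truth

rf = RandomForestClassifier(n_estimators=50, random_state=0)
rf.fit(feats.reshape(-1, F), labels.ravel())

proba = rf.predict_proba(feats.reshape(-1, F)).reshape(H, W, C)
smoothed = np.stack([uniform_filter(proba[..., c], size=3) for c in range(C)], -1)
segmentation = smoothed.argmax(-1)         # refined per-pixel labels
print(segmentation.shape)                  # (32, 32)
```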
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HUPBA; no proj |
Approved |
no |
|
|
Call Number |
Admin @ si @ PVJ2018 |
Serial |
3072 |
|
|
|
|
|
Author |
Abel Gonzalez-Garcia; Davide Modolo; Vittorio Ferrari |
|
|
Title |
Objects as context for detecting their semantic parts |
Type |
Conference Article |
|
Year |
2018 |
Publication |
31st IEEE Conference on Computer Vision and Pattern Recognition |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
6907-6916 |
|
|
Keywords |
Proposals; Semantics; Wheels; Automobiles; Context modeling; Task analysis; Object detection |
|
|
Abstract |
We present a semantic part detection approach that effectively leverages object information. We use the object appearance and its class as indicators of what parts to expect. We also model the expected relative location of parts inside the objects based on their appearance. We achieve this with a new network module, called OffsetNet, that efficiently predicts a variable number of part locations within a given object. Our model incorporates all these cues to detect parts in the context of their objects. This leads to considerably higher performance for the challenging task of part detection compared to using part appearance alone (+5 mAP on the PASCAL-Part dataset). We also compare to other part detection methods on both the PASCAL-Part and CUB200-2011 datasets. |
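As a hypothetical sketch of an OffsetNet-style head: from an object's appearance feature and its class, regress relative (x, y) part locations. The real module predicts a variable number of parts; the fixed-size head below is a simplification, and all dimensions are assumptions.

```python
# Hypothetical OffsetNet-style head: object feature + class -> part offsets.
import torch
import torch.nn as nn

class OffsetHead(nn.Module):
    def __init__(self, feat_dim=256, n_classes=20, max_parts=8):
        super().__init__()
        self.max_parts = max_parts
        self.net = nn.Sequential(
            nn.Linear(feat_dim + n_classes, 128), nn.ReLU(),
            nn.Linear(128, max_parts * 2),        # (dx, dy) per candidate part
        )

    def forward(self, obj_feat, obj_class_onehot):
        z = torch.cat([obj_feat, obj_class_onehot], dim=1)
        return self.net(z).view(-1, self.max_parts, 2)  # relative part locations

head = OffsetHead()
offsets = head(torch.randn(4, 256), torch.eye(20)[:4])
print(offsets.shape)  # torch.Size([4, 8, 2])
```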
|
|
Address |
Salt Lake City; USA; June 2018 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CVPR |
|
|
Notes |
LAMP; 600.109; 600.120 |
Approved |
no |
|
|
Call Number |
Admin @ si @ GMF2018 |
Serial |
3229 |
|
|
|
|
|
Author |
Albert Clapes; Alex Pardo; Oriol Pujol; Sergio Escalera |
|
|
Title |
Action detection fusing multiple Kinects and a WIMU: an application to in-home assistive technology for the elderly |
Type |
Journal Article |
|
Year |
2018 |
Publication |
Machine Vision and Applications |
Abbreviated Journal |
MVAP |
|
|
Volume |
29 |
Issue |
5 |
Pages |
765–788 |
|
|
Keywords |
Multimodal activity detection; Computer vision; Inertial sensors; Dense trajectories; Dynamic time warping; Assistive technology |
|
|
Abstract |
We present a vision-inertial system which combines two RGB-depth devices together with a wearable inertial movement unit in order to detect activities of daily living. From multi-view videos, we extract dense trajectories enriched with a histogram-of-normals description computed from the depth cue and bag them into multi-view codebooks. During the later classification step, a multi-class support vector machine with an RBF-χ² kernel combines the descriptions at kernel level. In order to perform action detection from the videos, a sliding-window approach is utilized. On the other hand, we extract accelerations, rotation angles, and jerk features from the inertial data collected by the wearable placed on the user's dominant wrist. During gesture spotting, dynamic time warping is applied and the alignment costs to a set of pre-selected gesture sub-classes are thresholded to determine possible detections. The outputs of the two modules are combined in a late-fusion fashion. The system is validated in a real-case scenario with elderly people from an elder home. Learning-based fusion results improve on those from the single modalities, demonstrating the success of such a multimodal approach. |
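The inertial gesture-spotting step lends itself to a compact sketch: dynamic time warping aligns an acceleration window against a pre-selected gesture template, and a threshold on the alignment cost decides a detection. The signals and threshold below are toy values, not the paper's.

```python
# Sketch of DTW-based gesture spotting on a 1D inertial signal (toy data).
import numpy as np

def dtw_cost(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance."""
    D = np.full((len(a) + 1, len(b) + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d = abs(a[i - 1] - b[j - 1])
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[-1, -1]

template = np.sin(np.linspace(0, np.pi, 20))          # pre-selected gesture
window = np.sin(np.linspace(0, np.pi, 25)) + 0.05     # wrist acceleration window

THRESHOLD = 5.0                                       # assumed, tuned on validation
cost = dtw_cost(window, template)
print(f"cost={cost:.2f} -> detection={cost < THRESHOLD}")
```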
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HUPBA; no proj |
Approved |
no |
|
|
Call Number |
Admin @ si @ CPP2018 |
Serial |
3125 |
|
|
|
|
|
Author |
Ilke Demir; Dena Bazazian; Adriana Romero; Viktoriia Sharmanska; Lyne P. Tchapmi |
|
|
Title |
WiCV 2018: The Fourth Women In Computer Vision Workshop |
Type |
Conference Article |
|
Year |
2018 |
Publication |
4th Women in Computer Vision Workshop |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
1941-19412 |
|
|
Keywords |
Conferences; Computer vision; Industries; Object recognition; Engineering profession; Collaboration; Machine learning |
|
|
Abstract |
We present WiCV 2018 – the Women in Computer Vision Workshop, organized in conjunction with CVPR 2018 to increase the visibility and inclusion of women researchers in the computer vision field. Computer vision and machine learning have made incredible progress over the past years, yet the number of female researchers is still low both in academia and in industry. WiCV is organized to raise the visibility of female researchers, to increase collaboration, and to provide mentorship and opportunities to female-identifying junior researchers in the field. In its fourth year, we are proud to present the changes and improvements over the past years, a summary of statistics for presenters and attendees, followed by expectations from future generations. |
|
|
Address |
Salt Lake City; USA; June 2018 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
WiCV |
|
|
Notes |
DAG; 600.121; 600.129 |
Approved |
no |
|
|
Call Number |
Admin @ si @ DBR2018 |
Serial |
3222 |
|
|
|
|
|
Author |
Arka Ujjal Dey; Suman Ghosh; Ernest Valveny |
|
|
Title |
Don't only Feel Read: Using Scene text to understand advertisements |
Type |
Conference Article |
|
Year |
2018 |
Publication |
IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
We propose a framework for automated classification of advertisement images, using not just visual features but also textual cues extracted from embedded text. Our approach takes inspiration from the assumption that ad images contain meaningful textual content that can provide a discriminative semantic interpretation and can thus aid in classification tasks. To this end, we develop a framework using off-the-shelf components and demonstrate the effectiveness of textual cues in semantic classification tasks. |
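Since the abstract stresses off-the-shelf components, the visual-plus-textual fusion can be sketched directly. The stand-in CNN features, TF-IDF text encoding, and logistic-regression classifier below are assumptions, not the paper's exact components.

```python
# Sketch of visual + textual late fusion for ad classification (toy data).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

ads_text = ["50% off sports shoes", "new family sedan 0% financing",
            "luxury watch since 1875", "running gear summer sale"]
labels = ["retail", "automotive", "luxury", "retail"]

rng = np.random.default_rng(0)
visual_feats = rng.random((4, 64))          # stand-in for CNN image features

tfidf = TfidfVectorizer().fit(ads_text)     # OCR'd scene text -> vectors
text_feats = tfidf.transform(ads_text).toarray()

fused = np.hstack([visual_feats, text_feats])
clf = LogisticRegression(max_iter=1000).fit(fused, labels)
print(clf.predict(fused[:1]))
```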
|
|
Address |
Salt Lake City; Utah; USA; June 2018 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CVPRW |
|
|
Notes |
DAG; 600.121; 600.129 |
Approved |
no |
|
|
Call Number |
Admin @ si @ DGV2018 |
Serial |
3551 |
|
|
|
|
|
Author |
Pau Rodriguez; Josep M. Gonfaus; Guillem Cucurull; Xavier Roca; Jordi Gonzalez |
|
|
Title |
Attend and Rectify: A Gated Attention Mechanism for Fine-Grained Recovery |
Type |
Conference Article |
|
Year |
2018 |
Publication |
15th European Conference on Computer Vision |
Abbreviated Journal |
|
|
|
Volume |
11212 |
Issue |
|
Pages |
357-372 |
|
|
Keywords |
Deep Learning; Convolutional Neural Networks; Attention |
|
|
Abstract |
We propose a novel attention mechanism to enhance Convolutional Neural Networks for fine-grained recognition. It learns to attend to lower-level feature activations without requiring part annotations and uses these activations to update and rectify the output likelihood distribution. In contrast to other approaches, the proposed mechanism is modular, architecture-independent and efficient both in terms of parameters and computation required. Experiments show that networks augmented with our approach systematically improve their classification accuracy and become more robust to clutter. As a result, Wide Residual Networks augmented with our proposal surpass the state-of-the-art classification accuracies on CIFAR-10, the Adience gender recognition task, Stanford Dogs, and UEC Food-100. |
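A minimal sketch of the gated attention pattern described above: an attention module over lower-level activations forms its own class hypothesis, and a learned gate mixes it into the base output distribution. The gating form and sizes are assumptions; the paper's exact formulation may differ.

```python
# Hypothetical attend-and-rectify module (PyTorch).
import torch
import torch.nn as nn

class AttendRectify(nn.Module):
    def __init__(self, channels=64, n_classes=10):
        super().__init__()
        self.attn = nn.Conv2d(channels, 1, 1)          # spatial attention map
        self.head = nn.Linear(channels, n_classes)     # attention hypothesis
        self.gate = nn.Parameter(torch.zeros(1))       # learned mixing gate

    def forward(self, feat, base_logits):
        w = torch.softmax(self.attn(feat).flatten(1), dim=1)       # (B, H*W)
        pooled = (feat.flatten(2) * w.unsqueeze(1)).sum(-1)        # (B, C)
        g = torch.sigmoid(self.gate)
        return (1 - g) * base_logits + g * self.head(pooled)       # rectified

mod = AttendRectify()
out = mod(torch.randn(2, 64, 8, 8), torch.randn(2, 10))
print(out.shape)  # torch.Size([2, 10])
```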
|
|
Address |
Munich; September 2018 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ECCV |
|
|
Notes |
ISE; 600.098; 602.121; 600.119 |
Approved |
no |
|
|
Call Number |
Admin @ si @ RGC2018 |
Serial |
3139 |
|
|
|
|
|
Author |
Xialei Liu; Joost Van de Weijer; Andrew Bagdanov |
|
|
Title |
Leveraging Unlabeled Data for Crowd Counting by Learning to Rank |
Type |
Conference Article |
|
Year |
2018 |
Publication |
31st IEEE Conference on Computer Vision and Pattern Recognition |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
7661-7669 |
|
|
Keywords |
Task analysis; Training; Computer vision; Visualization; Estimation; Head; Context modeling |
|
|
Abstract |
We propose a novel crowd counting approach that leverages abundantly available unlabeled crowd imagery in a learning-to-rank framework. To induce a ranking of cropped images, we use the observation that any sub-image of a crowded scene image is guaranteed to contain the same number of or fewer persons than the super-image. This allows us to address the problem of the limited size of existing datasets for crowd counting. We collect two crowd scene datasets from Google using keyword searches and query-by-example image retrieval, respectively. We demonstrate how to efficiently learn from these unlabeled datasets by incorporating learning-to-rank in a multi-task network which simultaneously ranks images and estimates crowd density maps. Experiments on two of the most challenging crowd counting datasets show that our approach obtains state-of-the-art results. |
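The ranking observation translates directly into a self-supervised loss: the count predicted for a crop should not exceed the count predicted for its source image, which a hinge penalty enforces. The counter network and crops in this sketch are stand-ins, not the paper's architecture.

```python
# Sketch of the self-supervised ranking constraint on nested crops (PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F

counter = nn.Sequential(            # toy density-style counter: image -> count
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
)

full = torch.randn(4, 3, 64, 64)    # unlabeled crowd images
crop = full[:, :, 16:48, 16:48]     # sub-images of the same scenes

c_full = counter(full).squeeze(1)
c_crop = counter(F.interpolate(crop, size=64)).squeeze(1)

# Hinge on the violation: a crop's count must not exceed the full image's.
rank_loss = torch.relu(c_crop - c_full).mean()
rank_loss.backward()
print(float(rank_loss))
```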
|
|
Address |
Salt Lake City; USA; June 2018 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CVPR |
|
|
Notes |
LAMP; 600.109; 600.106; 600.120 |
Approved |
no |
|
|
Call Number |
Admin @ si @ LWB2018 |
Serial |
3159 |
|
|
|
|
|
Author |
Francisco Cruz; Oriol Ramos Terrades |
|
|
Title |
A probabilistic framework for handwritten text line segmentation |
Type |
Miscellaneous |
|
Year |
2018 |
Publication |
arXiv |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
Document Analysis; Text Line Segmentation; EM algorithm; Probabilistic Graphical Models; Parameter Learning |
|
|
Abstract |
We successfully combine the Expectation-Maximization algorithm and variational approaches for parameter learning and computing inference on Markov random fields. This is a general method that can be applied to many computer vision tasks. In this paper, we apply it to handwritten text line segmentation. We conduct several experiments that demonstrate that our method deals with common issues of this task, such as complex document layouts or non-Latin scripts. The obtained results prove that our method achieves state-of-the-art performance on different benchmark datasets without any particular fine-tuning step. |
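As a toy illustration of EM-based parameter learning for line segmentation, the sketch below models the vertical positions of ink pixels as a 1D Gaussian mixture with one component per text line. This is a stand-in for intuition only; the paper's MRF formulation and variational inference are not reproduced.

```python
# Toy EM: text lines as a 1D Gaussian mixture over ink y-coordinates.
import numpy as np

rng = np.random.default_rng(0)
# Toy "ink" y-coordinates from three text lines centred at 20, 60, 100.
y = np.concatenate([rng.normal(m, 4, 200) for m in (20, 60, 100)])

K = 3
mu, sigma, pi = np.array([10.0, 50.0, 90.0]), np.full(K, 10.0), np.full(K, 1 / K)

for _ in range(50):
    # E-step: responsibility of each line (component) for each ink pixel.
    dens = pi * np.exp(-0.5 * ((y[:, None] - mu) / sigma) ** 2) / sigma
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate line centres, spreads and weights.
    nk = resp.sum(axis=0)
    mu = (resp * y[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (y[:, None] - mu) ** 2).sum(axis=0) / nk)
    pi = nk / len(y)

print("estimated line centres:", np.round(np.sort(mu), 1))
```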
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
DAG; 600.097; 600.121 |
Approved |
no |
|
|
Call Number |
Admin @ si @ CrR2018 |
Serial |
3253 |
|
|
|
|
|
Author |
Alejandro Cartas; Juan Marin; Petia Radeva; Mariella Dimiccoli |
|
|
Title |
Batch-based activity recognition from egocentric photo-streams revisited |
Type |
Journal Article |
|
Year |
2018 |
Publication |
Pattern Analysis and Applications |
Abbreviated Journal |
PAA |
|
|
Volume |
21 |
Issue |
4 |
Pages |
953–965 |
|
|
Keywords |
Egocentric vision; Lifelogging; Activity recognition; Deep learning; Recurrent neural networks |
|
|
Abstract |
Wearable cameras can gather large amounts of image data that provide rich visual information about the daily activities of the wearer. Motivated by the large number of health applications that could be enabled by the automatic recognition of daily activities, such as lifestyle characterization for habit improvement, context-aware personal assistance and tele-rehabilitation services, we propose a system to classify 21 daily activities from photo-streams acquired by a wearable photo-camera. Our approach combines the advantages of a late fusion ensemble strategy relying on convolutional neural networks at image level with the ability of recurrent neural networks to account for the temporal evolution of high-level features in photo-streams without relying on event boundaries. The proposed batch-based approach achieved an overall accuracy of 89.85%, outperforming state-of-the-art end-to-end methodologies. These results were achieved on a dataset consisting of 44,902 egocentric pictures from three persons captured over 26 days on average. |
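A sketch of the batch-based late-fusion idea: image-level CNN class scores are combined with an LSTM that models their temporal evolution over the photo-stream, without event boundaries. Dimensions and the equal-weight fusion rule are assumptions, not the paper's configuration.

```python
# Sketch of CNN + LSTM late fusion over an egocentric photo-stream (PyTorch).
import torch
import torch.nn as nn

N_CLASSES, T = 21, 16                        # 21 daily activities, 16 photos

cnn_head = nn.Linear(512, N_CLASSES)         # stand-in for a CNN classifier
lstm = nn.LSTM(512, 128, batch_first=True)
lstm_head = nn.Linear(128, N_CLASSES)

stream = torch.randn(1, T, 512)              # CNN features of one photo-stream

per_image = cnn_head(stream)                 # (1, T, 21) image-level scores
temporal, _ = lstm(stream)
per_image_t = lstm_head(temporal)            # (1, T, 21) temporally smoothed

fused = 0.5 * per_image + 0.5 * per_image_t  # late fusion of the two streams
print(fused.argmax(-1).shape)                # activity label per photo
```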
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB; no proj |
Approved |
no |
|
|
Call Number |
Admin @ si @ CMR2018 |
Serial |
3186 |
|