Robust Structural Sparse Tracking


Disclaimer: The code links provided for this paper are external links. Science Nest takes no responsibility for the accuracy, legality, or content of these links. By downloading the code, you agree to comply with the terms of use set out by its author(s).

Authors Tianzhu Zhang, Changsheng Xu, Ming-Hsuan Yang
Journal/Conference Name IEEE Transactions on Pattern Analysis and Machine Intelligence
Paper Category
Paper Abstract Sparse representations have been applied to visual tracking by finding the best candidate region with minimal reconstruction error based on a set of target templates. However, most existing sparse trackers only consider holistic or local representations and do not make full use of the intrinsic structure among and inside target candidate regions, thereby making them less effective when similar objects appear at close proximity or under occlusion. In this paper, we propose a novel structural sparse representation, which not only exploits the intrinsic relationships among target candidate regions and local patches to learn their representations jointly, but also preserves the spatial structure among the local patches inside each target candidate region. For robust visual tracking, we take outliers resulting from occlusion and noise into account when searching for the best target region. Constructed within a Bayesian filtering framework, we show that the proposed algorithm accommodates most existing sparse trackers with respective merits. The formulated problem can be efficiently solved using an accelerated proximal gradient method that yields a sequence of closed form updates. Qualitative and quantitative evaluations on challenging benchmark datasets demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods.
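The abstract describes sparse coding of candidate regions against a set of target templates, solved with an accelerated proximal gradient (APG) method whose iterations have closed-form updates, and selection of the candidate with the smallest reconstruction error. The sketch below is not taken from the paper's released code; it only illustrates that basic L1 building block (FISTA-style APG with soft-thresholding, plus selection by reconstruction error), assuming a plain holistic representation. The structural terms over candidates and local patches, the occlusion/outlier handling, and the Bayesian (particle) filtering machinery described in the paper are omitted, and all names and data here are illustrative.

```python
import numpy as np

def soft_threshold(x, tau):
    """Element-wise soft-thresholding: the proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def sparse_code_apg(D, y, lam=0.01, n_iter=100):
    """Solve min_c 0.5*||y - D c||^2 + lam*||c||_1 with a FISTA-style APG loop."""
    c = np.zeros(D.shape[1])
    z = c.copy()
    t = 1.0
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the smooth gradient
    for _ in range(n_iter):
        grad = D.T @ (D @ z - y)                      # gradient step on the momentum point
        c_next = soft_threshold(z - grad / L, lam / L)  # closed-form proximal update
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = c_next + ((t - 1.0) / t_next) * (c_next - c)
        c, t = c_next, t_next
    return c

def select_best_candidate(D, candidates, lam=0.01):
    """Return the index of the candidate with minimal reconstruction error."""
    errors = []
    for y in candidates:
        c = sparse_code_apg(D, y, lam)
        errors.append(np.linalg.norm(y - D @ c) ** 2)
    return int(np.argmin(errors))

# Toy usage with random vectors standing in for template / candidate features.
rng = np.random.default_rng(0)
D = rng.standard_normal((256, 10))                    # 10 target templates, 256-dim features
D /= np.linalg.norm(D, axis=0, keepdims=True)         # unit-norm dictionary atoms
candidates = [rng.standard_normal(256) for _ in range(20)]
print("best candidate index:", select_best_candidate(D, candidates))
```

In a tracker, the candidate features would come from image patches sampled around the previous target state, and the dictionary would hold (and periodically update) target templates; the paper's contribution is to couple the codes of all candidates and their local patches through structural regularization rather than solving each L1 problem independently as above.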
Date of publication 2018
Code Programming Language Python
Comment

Copyright Researcher 2021