Real-time ‘Actor-Critic’ Tracking



Authors: Boyu Chen, Dong Wang, Peixia Li, Shuang Wang, and Huchuan Lu
Journal/Conference Name: European Conference on Computer Vision (ECCV)
Paper Abstract: In this work, we propose a novel tracking algorithm with real-time performance based on the ‘Actor-Critic’ framework. This framework consists of two major components: ‘Actor’ and ‘Critic’. The ‘Actor’ model aims to infer the optimal choice in a continuous action space, which directly makes the tracker move the bounding box to the object’s location in the current frame. For offline training, the ‘Critic’ model is introduced to form an ‘Actor-Critic’ framework with reinforcement learning; it outputs a Q-value to guide the learning of both the ‘Actor’ and ‘Critic’ deep networks. We then modify the original deep deterministic policy gradient (DDPG) algorithm to effectively train our ‘Actor-Critic’ model for the tracking task. For online tracking, the ‘Actor’ model provides a dynamic search strategy to locate the tracked object efficiently, and the ‘Critic’ model acts as a verification module to make the tracker more robust. To the best of our knowledge, this work is the first attempt to exploit the continuous action space and the ‘Actor-Critic’ framework for visual tracking. Extensive experimental results on popular benchmarks demonstrate that the proposed tracker performs favorably against many state-of-the-art methods while running in real time.
Date of publication: 2018
Code Programming Language: Python
