Learning a Part-Based Pedestrian Detector in a Virtual World


Disclaimer: The code links provided for this paper are external. Science Nest takes no responsibility for the accuracy, legality, or content of these links. By downloading the code, you agree to comply with the terms of use set out by its author(s).


Authors Jiaolong Xu, David Vázquez, A. Peña, J. Marín, D. Ponsa
Journal/Conference Name IEEE Transactions on Intelligent Transportation Systems
Paper Category
Paper Abstract Detecting pedestrians with on-board vision systems is of paramount interest for assisting drivers to prevent vehicle-to-pedestrian accidents. The core of a pedestrian detector is its classification module, which aims at deciding if a given image window contains a pedestrian. Given the difficulty of this task, many classifiers have been proposed during the last 15 years. Among them, the so-called (deformable) part-based classifiers, including multiview modeling, are usually top ranked in accuracy. Training such classifiers is not trivial since a proper aspect clustering and spatial part alignment of the pedestrian training samples are crucial for obtaining an accurate classifier. In this paper, we first perform automatic aspect clustering and part alignment by using virtual-world pedestrians, i.e., human annotations are not required. Second, we use a mixture-of-parts approach that allows part sharing among different aspects. Third, these proposals are integrated in a learning framework, which also allows incorporating real-world training data to perform domain adaptation between virtual- and real-world cameras. Overall, the obtained results on four popular on-board data sets show that our proposal clearly outperforms the state-of-the-art deformable part-based detector known as latent support vector machine.
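To make the part-based scoring idea referenced in the abstract concrete, below is a minimal, hypothetical Python/NumPy sketch of how a single image window is scored by a deformable part-based model (the kind of latent-SVM detector the paper compares against). The feature map, filter shapes, anchor positions, and quadratic deformation weights are illustrative assumptions; this is not the authors' released code or their mixture-of-parts learning framework.

import numpy as np

def correlate(feat, filt):
    """Dense cross-correlation of an HxWxD feature map with an hxwxD filter."""
    H, W, D = feat.shape
    h, w, _ = filt.shape
    out = np.empty((H - h + 1, W - w + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(feat[y:y + h, x:x + w] * filt)
    return out

def score_window(feat, root, parts):
    """Root filter response plus, for each part, the best placement
    after subtracting a quadratic deformation cost around its anchor."""
    score = correlate(feat, root["filter"])[0, 0]   # root placed at the window origin
    for p in parts:
        resp = correlate(feat, p["filter"])          # part responses over the window
        ys, xs = np.mgrid[0:resp.shape[0], 0:resp.shape[1]]
        dy, dx = ys - p["anchor"][0], xs - p["anchor"][1]
        deform = p["dw"][0] * dy ** 2 + p["dw"][1] * dx ** 2
        score += np.max(resp - deform)               # latent (best) part placement
    return score

A full detector would learn the filters and deformation weights (with latent SVM in the baseline, or with the paper's virtual-world clustering, part alignment, and part sharing) and evaluate this score over all candidate windows; the sketch only shows how one window is scored once a model is given.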
Date of publication 2014
Code Programming Language Python
Comment
