Robust Visual Knowledge Transfer via EDA

Authors Lei Zhang and David Zhang
Journal/Conference Name IEEE Transactions on Image Processing
Paper Category
Paper Abstract We address the problem of visual knowledge adaptation by leveraging labeled patterns from the source domain and a very limited number of labeled instances in the target domain to learn a robust classifier for visual categorization. We introduce a new semi-supervised cross-domain network learning framework, referred to as Extreme Domain Adaptation (EDA), that allows us to simultaneously learn a category transformation and an extreme classifier by minimizing the ℓ2,1-norm of the output weights and the learning error, in which the network output weights can be analytically determined. The unlabeled target data, as useful knowledge, are also learned through a fidelity term that minimizes the matching error between the extreme classifier and a base classifier, which guarantees stability during cross-domain learning; many existing classifiers can be readily incorporated as base classifiers. Additionally, a manifold regularization with a Laplacian graph is incorporated into EDA, which benefits semi-supervised learning. Under the EDA framework, we also propose an extended model learned with multiple views. Experiments on three visual data sets for video event recognition and object recognition demonstrate that our EDA outperforms existing cross-domain learning methods.
Date of publication 2016
Code Programming Language MATLAB
Comment
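The abstract describes a solver whose network output weights admit a closed-form solution: a squared loss on the labeled source and target data, a fidelity term tying predictions on unlabeled target data to a base classifier's soft outputs, and Laplacian manifold regularization. The MATLAB sketch below only illustrates that structure and is not the authors' released code: the function name eda_sketch, the helper laplacian_knn, the hyper-parameters lam, mu, and gam, and the binary k-NN graph are all assumptions, and a plain ridge penalty stands in for the ℓ2,1-norm used in the paper.

function [beta, W, b] = eda_sketch(Xs, Ys, Xt, Yt, Xu, Fu, L, lam, mu, gam)
% Illustrative ELM-style domain-adaptation solver (not the authors' code).
% Xs/Ys: labeled source data and one-hot labels; Xt/Yt: few labeled target
% samples; Xu: unlabeled target data; Fu: base-classifier soft outputs on Xu;
% L: hidden-layer size; lam/mu/gam: ridge, fidelity, and manifold weights.
rng(0);
d = size(Xs, 2);
W = randn(d, L);  b = randn(1, L);                     % random hidden-layer weights
Hfun = @(Z) 1 ./ (1 + exp(-(Z*W + repmat(b, size(Z, 1), 1))));  % sigmoid features

Hl = Hfun([Xs; Xt]);   Tl = [Ys; Yt];                  % labeled source + target
Hu = Hfun(Xu);                                         % unlabeled target
H  = [Hl; Hu];

Lap = laplacian_knn([Xs; Xt; Xu], 5);                  % simple k-NN graph Laplacian

% Closed-form output weights: labeled squared loss + fidelity to the base
% classifier on unlabeled target data + Laplacian smoothness + ridge penalty.
A    = Hl'*Hl + mu*(Hu'*Hu) + gam*(H'*Lap*H) + lam*eye(L);
beta = A \ (Hl'*Tl + mu*(Hu'*Fu));
end

function Lap = laplacian_knn(X, k)
% Unnormalized graph Laplacian from a symmetrized binary k-NN adjacency.
n = size(X, 1);
D = squareform(pdist(X));                              % pairwise Euclidean distances
A = zeros(n);
for i = 1:n
    [~, idx] = sort(D(i, :));
    A(i, idx(2:k+1)) = 1;                              % k nearest neighbors, skipping self
end
A   = max(A, A');                                      % symmetrize the graph
Lap = diag(sum(A, 2)) - A;
end

A new target sample x would then be scored with the same random feature map, e.g. [~, label] = max((1 ./ (1 + exp(-(x*W + b)))) * beta, [], 2).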

Copyright Researcher 2021