Robust Visual Knowledge Transfer via Extreme Learning Machine based Domain Adaptation

Disclaimer: The code links provided for this paper are external links. Science Nest takes no responsibility for the accuracy, legality, or content of these links. By downloading the code, you agree to comply with the terms of use set out by the author(s) of the code.

Authors Lei Zhang and David Zhang
Journal/Conference Name IEEE Transactions on Image Processing
Paper Category
Paper Abstract We address the problem of visual knowledge adaptation by leveraging labeled patterns from a source domain and a very limited number of labeled instances in a target domain to learn a robust classifier for visual categorization. This paper proposes a new extreme learning machine (ELM)-based cross-domain network learning framework, called ELM-based Domain Adaptation (EDA). It allows us to learn a category transformation and an ELM classifier with random projection by simultaneously minimizing the ℓ2,1-norm of the network output weights and the learning error. The unlabeled target data, as useful knowledge, are also integrated as a fidelity term to guarantee stability during cross-domain learning. The framework minimizes the matching error between the learned classifier and a base classifier, so that many existing classifiers can be readily incorporated as base classifiers. The network output weights can not only be determined analytically but are also transferable. In addition, a manifold regularization with a Laplacian graph is incorporated, which benefits semi-supervised learning. We further extend the model to multiple views, referred to as MvEDA. Experiments on benchmark visual datasets for video event recognition and object recognition demonstrate that our EDA methods outperform existing cross-domain learning methods.
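
For orientation, the sketch below shows only the standard ELM building block that EDA extends: a random hidden-layer projection followed by a closed-form, ridge-regularized solution for the output weights, trained on pooled labeled source and target samples. It is not the authors' EDA objective (which additionally uses the ℓ2,1-norm on the output weights, a fidelity term to a base classifier, and Laplacian manifold regularization on unlabeled target data); all variable names are hypothetical.

```matlab
% Minimal ELM sketch (assumed baseline, not the EDA method itself).
% Xs, Ys : source features (ns x d) and one-hot labels (ns x c)
% Xt, Yt : few labeled target features (nt x d) and one-hot labels (nt x c)
% L      : number of random hidden nodes
% lambda : ridge regularization weight
function model = elm_train(Xs, Ys, Xt, Yt, L, lambda)
    X = [Xs; Xt];                    % pool labeled source and target data
    T = [Ys; Yt];
    d = size(X, 2);
    model.W = rand(d, L) * 2 - 1;    % random input weights in [-1, 1]
    model.b = rand(1, L);            % random hidden biases
    H = 1 ./ (1 + exp(-(X * model.W + repmat(model.b, size(X, 1), 1))));  % sigmoid hidden layer
    % Closed-form, ridge-regularized output weights
    model.beta = (H' * H + lambda * eye(L)) \ (H' * T);
end

% Prediction reuses the same random projection:
%   H_new = 1 ./ (1 + exp(-(X_new * model.W + repmat(model.b, size(X_new, 1), 1))));
%   scores = H_new * model.beta;  [~, labels] = max(scores, [], 2);
```

In the paper's formulation, the plain ℓ2 penalty above is replaced by the ℓ2,1-norm and the loss is augmented with the fidelity and manifold terms described in the abstract, while keeping an analytical solution for the output weights.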
Date of publication 2016
Code Programming Language MATLAB
Comment

Copyright Researcher 2021