A General Kernelization Framework for Learning Algorithms Based on Kernel PCA

Authors Changshui Zhang, Feiping Nie*, Shiming Xiang
Journal/Conference Name Neurocomputing
Paper Abstract In this paper, a general kernelization framework for learning algorithms is proposed via a two-stage procedure: first, transform the data by kernel principal component analysis (KPCA), and then directly perform the learning algorithm on the transformed data. Although a few learning algorithms have been kernelized by this procedure before, why and under what conditions the procedure is feasible had not been studied further. In this paper, we explicitly present this kernelization framework and give a rigorous justification showing that, under some mild conditions, kernelization under this framework is equivalent to the traditional kernel method. We show that these mild conditions are usually satisfied by most learning algorithms. Therefore, most learning algorithms can be kernelized under this framework without having to be reformulated into inner-product form, a common yet vital step in traditional kernel methods. Motivated by this framework, we also propose a novel kernel method based on low-rank KPCA, which can be used to remove noise in the feature space, speed up the kernel algorithm, and improve its numerical stability. Experiments are presented to verify the validity and effectiveness of the proposed methods.
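
The two-stage procedure in the abstract can be illustrated with a minimal MATLAB sketch of the KPCA transformation stage. A Gaussian kernel is assumed here; the function name, parameters, and kernel choice are illustrative assumptions, not the authors' released code.

% Stage 1 of the framework: map the data to its KPCA coordinates.
% X:     n-by-d data matrix (one sample per row)
% sigma: Gaussian kernel width (assumed kernel choice)
% r:     number of principal components to keep (low-rank KPCA)
function Z = kpca_transform(X, sigma, r)
n  = size(X, 1);
sq = sum(X.^2, 2);
D2 = bsxfun(@plus, sq, sq') - 2 * (X * X');   % pairwise squared distances
K  = exp(-D2 / (2 * sigma^2));                % Gaussian kernel matrix
J  = eye(n) - ones(n) / n;                    % centering matrix
Kc = J * K * J;                               % center the kernel in feature space
[V, L] = eig((Kc + Kc') / 2);                 % symmetrize for numerical stability
[lam, idx] = sort(real(diag(L)), 'descend');  % eigenvalues, largest first
V   = V(:, idx(1:r));
lam = max(lam(1:r), 0);                       % clip tiny negative eigenvalues
Z   = V * diag(sqrt(lam));                    % KPCA coordinates of the samples
end

Stage 2 is then just the unmodified learning algorithm run on Z, for example Z = kpca_transform(X, 1.0, 20); labels = kmeans(Z, 3); (kmeans from the Statistics and Machine Learning Toolbox). Choosing r smaller than n corresponds to the low-rank KPCA variant mentioned in the abstract.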
Date of publication 2010
Code Programming Language MATLAB