Deep Neural Nets with Interpolating Function as Output Activation


Disclaimer: The code links provided for this paper are external. Science Nest bears no responsibility for the accuracy, legality, or content of these links. By downloading the code, you agree to comply with the terms of use set out by its authors.


Authors: Zuoqiang Shi, Xiyang Luo, Stanley J. Osher, Bao Wang, Wei Zhu, Zhen Li
Journal/Conference Name: NeurIPS 2018
Paper Category:
Paper Abstract: We replace the output layer of deep neural nets, typically the softmax function, with a novel interpolating function, and propose end-to-end training and testing algorithms for this new architecture. Compared with classical neural nets that use softmax as the output activation, the surrogate with an interpolating function as the output activation combines the advantages of both deep and manifold learning. The new framework offers two major advantages: first, it is better suited to settings with insufficient training data; second, it significantly improves generalization accuracy on a wide variety of networks. The algorithm is implemented in PyTorch, and the code will be made publicly available.
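To illustrate the idea of an interpolating output layer, here is a minimal sketch in NumPy. It is not the authors' algorithm (the paper's interpolating function is more sophisticated): instead of mapping logits through a softmax, each test point's class scores are interpolated from the one-hot labels of the training points in feature space, using hypothetical inverse-distance weights. The function name and the toy data are illustrative assumptions.

```python
import numpy as np

def interpolate_labels(train_feats, train_labels, test_feats, n_classes, eps=1e-8):
    """Distance-weighted label interpolation over training features.

    A simplified stand-in for an interpolating output activation:
    class scores for each test point are a weighted average of one-hot
    training labels, weighted by inverse squared distance in feature space.
    (Illustrative only; not the paper's exact interpolating function.)
    """
    one_hot = np.eye(n_classes)[train_labels]                      # (N, C)
    # pairwise squared distances between test and train features: (M, N)
    d2 = ((test_feats[:, None, :] - train_feats[None, :, :]) ** 2).sum(-1)
    w = 1.0 / (d2 + eps)                                           # inverse-distance weights
    w /= w.sum(axis=1, keepdims=True)                              # normalize rows to sum to 1
    return w @ one_hot                                             # (M, C) soft class scores

# toy example: two well-separated clusters in a 2-D feature space
train_feats = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
train_labels = np.array([0, 0, 1, 1])
test_feats = np.array([[0.05, 0.0], [5.05, 5.0]])

scores = interpolate_labels(train_feats, train_labels, test_feats, n_classes=2)
pred = scores.argmax(axis=1)  # each test point takes the class of its nearby cluster
```

In a full pipeline, `train_feats` and `test_feats` would be the penultimate-layer features produced by the trained network rather than raw inputs.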
Date of Publication: 2018
Code Programming Language: Python

Copyright Researcher 2022