Improving Sample Efficiency with Normalized RBF Kernels

Authors: David Obando-Paniagua, Randolf Scholz, Philip Kurzendörfer, Friedemann Schestag, Sebastian Pineda-Arango, Alperen Dedeoglu
Journal/Conference Name: arXiv preprint
Paper Abstract: In deep learning models, learning more with less data is becoming increasingly important. This paper explores how neural networks with normalized Radial Basis Function (RBF) kernels can be trained to achieve better sample efficiency. Moreover, we show how this kind of output layer can find embedding spaces where the classes are compact and well separated. To achieve this, we propose a two-phase method for training this type of neural network on classification tasks. Experiments on CIFAR-10 and CIFAR-100 show that networks with normalized kernels as the output layer can achieve higher sample efficiency, high compactness, and good separability through the presented method, in comparison to networks with a SoftMax output layer.
Date of Publication: 2020
Code Programming Language: Python
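
As a rough illustration of the idea the abstract describes, here is a minimal PyTorch sketch of a normalized RBF output layer: each class is represented by a learnable center in the embedding space, a Gaussian kernel scores the distance of an embedding to each center, and the kernel responses are normalized to sum to one, replacing the usual linear layer plus SoftMax. The class name, the gamma parameterization of the kernel width, and the random center initialization are assumptions made for illustration, not the authors' implementation, and the paper's two-phase training procedure is not shown here.

import torch
import torch.nn as nn

class NormalizedRBFOutput(nn.Module):
    # Scores an embedding against one learnable center per class with a
    # Gaussian RBF kernel, then normalizes the responses so they sum to one.
    def __init__(self, embed_dim, num_classes, gamma=1.0):
        super().__init__()
        # One learnable class center in the embedding space (assumed init).
        self.centers = nn.Parameter(torch.randn(num_classes, embed_dim))
        # Kernel width; this parameterization is an assumption.
        self.gamma = gamma

    def forward(self, x):
        # Squared Euclidean distance to every class center: (batch, num_classes).
        sq_dist = torch.cdist(x, self.centers).pow(2)
        # Gaussian RBF kernel response per class.
        k = torch.exp(-self.gamma * sq_dist)
        # Normalize so each row forms a probability distribution.
        return k / k.sum(dim=1, keepdim=True)

# Usage sketch: attach to any backbone's embeddings and train with NLL
# on the log of the normalized scores (dimensions here are hypothetical).
head = NormalizedRBFOutput(embed_dim=64, num_classes=10)
emb = torch.randn(32, 64)                      # stand-in backbone embeddings
probs = head(emb)                              # rows sum to 1
targets = torch.randint(0, 10, (32,))
loss = nn.functional.nll_loss(torch.log(probs + 1e-12), targets)

Because the scores depend on distances to class centers rather than dot products, minimizing this loss pulls same-class embeddings toward a shared center, which is consistent with the compact, well-separated embedding spaces the abstract reports.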