Improving the Generalization Ability of RBNN Using a Selective Strategy Based on the Gaussian Kernel Function

Authors

  • José M. Valls
  • Inés M. Galván
  • Pedro Isasi

Keywords

Radial Basis Neural Networks, generalization ability, selective learning, kernel functions

Abstract

Radial Basis Neural Networks have been successfully used in many applications, due mainly to their fast convergence properties. However, their level of generalization depends heavily on the quality of the training data. It has been shown that careful dynamic selection of training patterns can yield better generalization performance. In this paper, a learning method is presented that automatically selects the training patterns most appropriate to each new test sample. The method follows a selective learning strategy, in the sense that it builds approximations centered on the novel sample. It uses a Gaussian kernel function to decide the relevance of each training pattern according to its similarity to the novel sample. The proposed method has been applied to three different domains: an artificial approximation problem and two time series prediction problems. Results have been compared with those of the standard training method, which uses the complete training data set, and the new method shows better generalization ability.
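The selection strategy summarized above can be illustrated with a short sketch. The code below is an assumption-laden illustration, not the authors' implementation: it weights each training pattern with a Gaussian kernel on its distance to the novel test sample, keeps the patterns above a hypothetical relevance threshold, and fits a minimal RBF network on that subset. The helper names (select_patterns, SimpleRBFN), the threshold rule, the kernel width, and the random choice of centres are all assumptions made for illustration.

```python
import numpy as np

def gaussian_kernel_weights(X_train, x_test, sigma=1.0):
    """Relevance of each training pattern w.r.t. the novel sample,
    measured with a Gaussian kernel on the Euclidean distance."""
    d2 = np.sum((X_train - x_test) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def select_patterns(X_train, y_train, x_test, sigma=1.0, threshold=0.1):
    """Keep only the patterns whose kernel weight exceeds a relevance
    threshold (hypothetical selection rule, for illustration only)."""
    w = gaussian_kernel_weights(X_train, x_test, sigma)
    mask = w >= threshold
    return X_train[mask], y_train[mask], w[mask]

class SimpleRBFN:
    """Minimal RBF network: Gaussian centres sampled from the selected
    patterns, output weights fitted by linear least squares."""
    def __init__(self, n_centres=10, width=1.0):
        self.n_centres = n_centres
        self.width = width

    def _design(self, X):
        d2 = np.sum((X[:, None, :] - self.centres[None, :, :]) ** 2, axis=2)
        return np.exp(-d2 / (2.0 * self.width ** 2))

    def fit(self, X, y):
        rng = np.random.default_rng(0)
        idx = rng.choice(len(X), size=min(self.n_centres, len(X)), replace=False)
        self.centres = X[idx]
        Phi = self._design(X)
        self.w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        return self

    def predict(self, X):
        return self._design(X) @ self.w

# Usage: for each novel sample, train a local RBF network on the
# patterns selected around it and predict only at that sample.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.uniform(-3, 3, size=(500, 1))
    y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=500)
    x_star = np.array([0.7])
    Xs, ys, _ = select_patterns(X, y, x_star, sigma=0.5, threshold=0.2)
    model = SimpleRBFN(n_centres=8, width=0.5).fit(Xs, ys)
    print("prediction at x*:", model.predict(x_star[None, :])[0])
```

The design choice sketched here is that a separate, locally trained network answers each test query; the Gaussian kernel simply decides which part of the training set is relevant to that query.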

Published

2012-01-30

How to Cite

Valls, J. M., Galván, I. M., & Isasi, P. (2012). Improving the Generalization Ability of RBNN Using a Selective Strategy Based on the Gaussian Kernel Function. Computing and Informatics, 25(1), 1–15. Retrieved from http://147.213.75.17/ojs/index.php/cai/article/view/330