Abstract
When designed correctly, radial basis function (RBF) neural networks can approximate mathematical functions to any arbitrary degree of precision. Multilayer perceptron (MLP) neural networks are also universal function approximators, but RBF neural networks can often be trained several orders of magnitude more quickly than an MLP network with an equivalent level of function approximation capability. The primary challenge with designing a high-quality RBF neural network is selecting the best values for the network's "centers", which can be thought of as geometric locations within the input space. Traditionally, the locations for the RBF nodes' centers are chosen either through random sampling of the training data or by using k-means clustering. The current paper proposes a new algorithm for selecting the locations of the centers by relying on a structure known as an "opportunity matrix". The performance of the proposed algorithm is compared against that of the random sampling and k-means clustering methods using a large set of experiments involving both a real-world dataset from the steel industry and a variety of mathematical and statistical functions. The results indicate that the proposed opportunity matrix algorithm is almost always much better at selecting locations for an RBF network's centers than either of the two traditional techniques, yielding RBF neural networks with superior function approximation capabilities.
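For readers unfamiliar with the k-means baseline the abstract refers to, the following is a minimal sketch (not the paper's opportunity matrix algorithm) of how RBF centers are conventionally chosen by clustering the training inputs and how the output weights are then fit by least squares. It assumes Gaussian basis functions with a shared width parameter; the function names, width value, and number of centers are illustrative choices, not taken from the paper.

```python
# Sketch of the k-means baseline for RBF center selection (illustrative only).
import numpy as np
from sklearn.cluster import KMeans

def train_rbf(X, y, n_centers=20, width=1.0):
    """Fit a Gaussian RBF network whose centers are chosen by k-means clustering."""
    # 1. Choose center locations by clustering the training inputs.
    centers = KMeans(n_clusters=n_centers, n_init=10).fit(X).cluster_centers_

    # 2. Build the design matrix of Gaussian activations.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    Phi = np.exp(-d2 / (2.0 * width ** 2))

    # 3. Solve for the linear output-layer weights by least squares.
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return centers, w

def predict_rbf(X, centers, w, width=1.0):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2)) @ w

# Example: approximate a simple one-dimensional function.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(2 * X[:, 0]) + 0.05 * rng.standard_normal(200)
centers, w = train_rbf(X, y, n_centers=15, width=0.5)
y_hat = predict_rbf(X, centers, w, width=0.5)
```

The random-sampling baseline mentioned in the abstract differs only in step 1, where centers would instead be drawn directly from the training inputs (e.g., `X[rng.choice(len(X), n_centers, replace=False)]`).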