Determining the number of nodes in the hidden layers of a neural network is a fundamental and challenging problem. Various efforts have been made to study the relation between the approximation ability and the number of hidden nodes for specific architectures, such as single-hidden-layer and two-hidden-layer feedforward neural networks with particular activation functions or with activation functions satisfying certain conditions. For arbitrary feedforward neural networks, however, theoretical results on this question are scarce. This paper gives an upper bound on the number of nodes in each hidden layer of the most general feedforward neural network, the multilayer perceptron (MLP), from an algebraic point of view. First, we put forward the
method of expansion linear spaces to
investigate the algebraic structure and properties of the outputs of MLPs. We then prove that, given $k$ distinct training samples, for any MLP with $k$ nodes in each hidden layer, if a certain optimization problem admits a solution, the approximation error remains unchanged when further nodes are added to the hidden layers. Furthermore, we show that for any MLP whose output-layer activation function is bounded on
$\mathbb{R}$, at most
$k$ nodes in each hidden layer are needed to learn $k$ training samples.
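As an illustrative special case (a well-known fact for single-hidden-layer networks with a linear output, not the general construction of this paper), suppose the hidden weights $w_j$ and biases $b_j$, $j = 1, \dots, k$, are chosen so that the $k \times k$ matrix $H$ with entries $H_{ij} = \sigma(w_j^{\top} x_i + b_j)$ is invertible for the $k$ distinct samples $(x_i, y_i)$; then the output weights $\beta = H^{-1} y$ fit the samples exactly with exactly $k$ hidden nodes. The symbols $H$, $\beta$, $w_j$, $b_j$, and $\sigma$ are introduced here only for this sketch:
\[
  H =
  \begin{pmatrix}
    \sigma(w_1^{\top} x_1 + b_1) & \cdots & \sigma(w_k^{\top} x_1 + b_k) \\
    \vdots & \ddots & \vdots \\
    \sigma(w_1^{\top} x_k + b_1) & \cdots & \sigma(w_k^{\top} x_k + b_k)
  \end{pmatrix},
  \qquad
  H \beta = y
  \;\Longrightarrow\;
  \beta = H^{-1} y .
\]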