# Tech Report CS-90-01

## Complexity Issues in Learning by Neural Nets

### Jyh-Han Lin and Jeffrey Scott Vitter

#### January 1990 (Revised 11/90)

### Abstract:

We consider the computational complexity of learning by neural nets. We are interested in how hard it is to design appropriate neural net architectures and to train neural nets for general and specialized learning tasks. We introduce a neural net learning model and classify several neural net design and optimization problems within the polynomial-time hierarchy. We also investigate how much easier the problems become if the class of concepts to be learned is known *a priori* and the net architecture is allowed to be sufficiently non-optimal. We show that the problem of finding an optimal perceptron (in terms of the number of non-zero weights) consistent with a set of training examples is *NP*-complete. Our main result shows that the training problem for 2-cascade neural nets (which have only one hidden unit) is *NP*-complete, which implies that finding an optimal net (in terms of the number of non-input units) to load a set of examples is also *NP*-complete. Since the perceptron training problem can be solved in polynomial time by linear programming techniques, this result also demonstrates a surprising gap between the computational complexities of one-node and two-node neural net training problems. We conjecture that training a *k*-cascade neural net, which is a classical threshold network training problem, is also *NP*-complete, for each fixed $k \geq 2$.
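The polynomial-time claim for one-node (perceptron) training can be illustrated by casting consistency as a linear-programming feasibility problem: find weights $w$ and bias $b$ with $y_i (w \cdot x_i + b) \geq 1$ for every labeled example. The following is a minimal sketch, not the paper's construction; it assumes SciPy's `linprog` solver, and the function and variable names are illustrative.

```python
# Sketch: perceptron consistency as an LP feasibility problem.
# We seek (w, b) with y_i * (w . x_i + b) >= 1 for all examples,
# i.e. -y_i * (w . x_i + b) <= -1, solvable in polynomial time.
import numpy as np
from scipy.optimize import linprog

def train_perceptron_lp(X, y):
    """Return (w, b) consistent with labels y in {-1, +1}, or None if
    the examples are not linearly separable."""
    n, d = X.shape
    # Variables are [w_1, ..., w_d, b]; row i of A_ub encodes
    # -y_i * (x_i . w + b) <= -1.
    A_ub = -(y[:, None] * np.hstack([X, np.ones((n, 1))]))
    b_ub = -np.ones(n)
    res = linprog(c=np.zeros(d + 1),            # pure feasibility: zero objective
                  A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (d + 1),  # weights may be negative
                  method="highs")
    if not res.success:
        return None
    return res.x[:d], res.x[d]

# Usage: the Boolean AND function is linearly separable.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1., -1., -1., 1.])
wb = train_perceptron_lp(X, y)
```

By contrast, the paper's main result is that no such polynomial-time formulation is likely for nets with even one hidden unit, since 2-cascade training is *NP*-complete.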

(Complete text available in PDF.)