Summary: In this paper, we derive necessary optimality conditions for neural network approximation. We consider neural networks equipped with the Manhattan ($l_1$) norm and the Chebyshev ($\max$) norm, and develop the optimality conditions for networks with at most one hidden layer. We reformulate the resulting nonsmooth unconstrained optimisation problems as higher-dimensional constrained problems with smooth objective functions and constraints, and then apply the Karush-Kuhn-Tucker (KKT) conditions to obtain necessary optimality conditions, which we state in terms of convex analysis and convex sets.
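As a hedged illustration of the kind of reformulation the summary refers to (the symbols $w$, $t$, and the residuals $f_i$ are assumptions for this sketch, not notation from the paper): a nonsmooth Chebyshev-type approximation problem can be rewritten as a smooth constrained problem by introducing an auxiliary variable bounding the maximal residual,

\[
\min_{w}\ \max_{1 \le i \le m} \lvert f_i(w) \rvert
\quad\Longleftrightarrow\quad
\begin{aligned}
\min_{w,\,t}\ & t \\
\text{s.t. } & f_i(w) \le t, \quad -f_i(w) \le t, \quad i = 1, \dots, m,
\end{aligned}
\]

after which the KKT conditions of the smooth constrained problem can be invoked to derive necessary optimality conditions for the original nonsmooth problem.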