KKT-based optimality conditions for neural network approximation

Abstract

In this paper, we obtain necessary optimality conditions for neural network approximation. We consider approximation by neural networks in the Manhattan ($l_1$) and Chebyshev ($\max$) norms, and base the optimality conditions on neural networks with at most one hidden layer. We reformulate the nonsmooth unconstrained optimisation problems as higher-dimensional constrained problems with smooth objective functions and smooth constraints. We then apply the KKT conditions to derive necessary optimality conditions, which we present in terms of convex analysis and convex sets.
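As a minimal sketch of the kind of reformulation described above (the data $(x_i, y_i)$, $i = 1, \dots, m$, and the network $N(x; w)$ are illustrative notation, not taken from the paper), consider the Chebyshev approximation problem

$$\min_{w} \; \max_{1 \le i \le m} \, \lvert y_i - N(x_i; w) \rvert.$$

The objective is nonsmooth, but introducing an epigraph variable $t$ yields the equivalent constrained problem

$$\min_{w,\, t} \; t \quad \text{subject to} \quad -t \le y_i - N(x_i; w) \le t, \quad i = 1, \dots, m,$$

whose objective and constraints are smooth whenever $N$ is smooth in $w$, so the KKT conditions can be applied. The Manhattan case is analogous, with one variable $t_i$ per data point and objective $\sum_{i=1}^{m} t_i$.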

Julien Ugon