Thesis Type: Doctoral
Institution Where the Thesis Was Conducted: Orta Doğu Teknik Üniversitesi (Middle East Technical University), Uygulamalı Matematik Enstitüsü (Institute of Applied Mathematics), Türkiye
Thesis Approval Date: 2010
Student: AYSUN TEZEL ÖZTURAN
Supervisor: BÜLENT KARASÖZEN
Abstract: Semi-infinite programming problems are a class of optimization problems with finitely many decision variables subject to infinitely many inequality constraints. If the infinite index set of the inequality constraints depends on the decision variable, the problem is called a generalized semi-infinite programming problem (GSIP). If the infinite index set is fixed, the problem is called a standard semi-infinite programming problem (SIP). In this thesis, convergence of a semismooth Newton method for generalized semi-infinite programming problems with convex lower-level problems is investigated. In this method, the upper- and lower-level Karush-Kuhn-Tucker conditions of the optimization problem are reformulated, by means of nonlinear complementarity problem (NCP) functions, as a semismooth system of equations. A possible violation of strict complementary slackness causes nonsmoothness. In this study, we show that the standard regularity condition for convergence of the semismooth Newton method is satisfied under natural assumptions for semi-infinite programs. In fact, under the Reduction Ansatz in the lower-level problem and strong stability in the reduced upper-level problem, this regularity condition is satisfied. In particular, we do not have to assume strict complementary slackness in the upper level. Furthermore, in this thesis we assume strict complementary slackness neither in the upper nor in the lower level. If strict complementary slackness is violated in the lower level, the auxiliary functions of the locally reduced problem are not necessarily twice continuously differentiable. Nevertheless, we can show that a standard regularity condition for quadratic convergence of the semismooth Newton method holds under a natural assumption for semi-infinite programs. Numerical examples from, among others, design centering and robust optimization illustrate the performance of the method.
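For orientation, here is a minimal sketch of the two problem classes and of the reformulation device mentioned in the abstract; the notation is assumed here rather than taken from the thesis, and the Fischer-Burmeister function is one common NCP function, not necessarily the one used in the thesis.

```latex
% GSIP: the index set Y(x) of the infinitely many constraints
% depends on the decision variable x.
\min_{x \in \mathbb{R}^n} f(x)
\quad \text{s.t.} \quad g(x,y) \le 0 \quad \text{for all } y \in Y(x).
% SIP is the special case of a fixed index set, Y(x) \equiv Y.

% A standard NCP function is the Fischer-Burmeister function:
\varphi(a,b) = \sqrt{a^2 + b^2} - a - b,
\qquad \varphi(a,b) = 0 \iff a \ge 0,\ b \ge 0,\ ab = 0.
% Replacing each KKT complementarity pair by \varphi(\cdot,\cdot) = 0
% turns the KKT conditions into a square system of semismooth equations;
% \varphi is nonsmooth exactly at a = b = 0, i.e. where strict
% complementary slackness fails.
```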
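The sketch below illustrates, under the same assumptions, how such a reformulated KKT system can be attacked with a semismooth Newton iteration. It is a generic toy implementation, not the thesis's method: a true semismooth Newton step uses an element of Clarke's generalized Jacobian, which is approximated here by forward differences for brevity, and all helper names are hypothetical.

```python
import numpy as np

def fischer_burmeister(a, b):
    # NCP function: phi(a, b) = 0  <=>  a >= 0, b >= 0, a*b = 0.
    return np.sqrt(a**2 + b**2) - a - b

def semismooth_newton(F, z0, tol=1e-10, max_iter=50, fd_eps=1e-8):
    # Newton iteration on F(z) = 0 for a semismooth F. A forward-difference
    # matrix stands in for an element of the generalized Jacobian
    # (adequate away from the kinks of F).
    z = z0.astype(float)
    for _ in range(max_iter):
        Fz = F(z)
        if np.linalg.norm(Fz) < tol:
            break
        J = np.empty((Fz.size, z.size))
        for j in range(z.size):
            e = np.zeros_like(z)
            e[j] = fd_eps
            J[:, j] = (F(z + e) - Fz) / fd_eps
        z = z + np.linalg.solve(J, -Fz)
    return z

# Toy KKT system for: minimize x^2 / 2 subject to x >= 1.
# Stationarity: x - mu = 0; complementarity: mu >= 0, x - 1 >= 0,
# mu * (x - 1) = 0, encoded via the Fischer-Burmeister function.
def kkt_residual(z):
    x, mu = z
    return np.array([x - mu, fischer_burmeister(mu, x - 1.0)])

z_star = semismooth_newton(kkt_residual, np.array([2.0, 0.5]))
print(z_star)  # expected to approach (x, mu) = (1, 1)
```

At the solution (x, mu) = (1, 1) the Fischer-Burmeister arguments are (1, 0), away from the kink at (0, 0), so the iteration behaves locally like a smooth Newton method; the regularity analysis in the thesis concerns guaranteeing fast local convergence even when such kinks, i.e. failures of strict complementary slackness, occur.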