Derivative-free algorithms for large-scale non-smooth optimization and their applications


Thesis Type: Doctoral

Institution: Middle East Technical University (Orta Doğu Teknik Üniversitesi), Institute of Applied Mathematics, Turkey

Thesis Approval Date: 2013

Student: ALİ HAKAN TOR

Supervisor: BÜLENT KARASÖZEN

Abstract:

In this thesis, various numerical methods are developed to solve nonsmooth and, in particular, nonconvex optimization problems. More specifically, three numerical algorithms are developed for solving nonsmooth convex optimization problems, and one algorithm is proposed for solving nonsmooth nonconvex optimization problems. In general, the main differences between smooth optimization algorithms lie in the calculation of search directions, the line searches used to find step sizes, and the stopping criteria. In nonsmooth optimization, however, there is one additional difference: algorithms may use different generalizations of the gradient.

To develop algorithms for solving nonsmooth convex optimization problems we use the concept of the codifferential. Although a codifferential calculus exists, computing the whole codifferential is not an easy task. Therefore, in the first numerical method only a few elements of the codifferential are used to calculate search directions. To reduce the number of codifferential evaluations, the second method reuses elements of the codifferential calculated in previous iterations when computing search directions. In both the first and the second method, the calculation of search directions is reduced to the solution of a certain quadratic programming problem, whose size can grow significantly as the number of variables increases. To avoid this problem, the third method, called the aggregate codifferential method, fixes the number of codifferential elements used to find search directions. Such an approach significantly reduces the complexity of codifferential methods and makes them applicable to large-scale nonsmooth optimization problems. These methods are applied to well-known nonsmooth optimization test problems, such as min-max and general nonsmooth optimization problems, and the numerical results are visualized using performance profiles. In addition, the methods are validated by comparing them with subgradient and bundle methods in numerical experiments, and their convergence is analyzed. Finally, the first method is extended to minimize nonsmooth convex functions subject to linear inequalities by means of slack variables.

The notion of a quasisecant is used to design an algorithm for solving nonsmooth nonconvex unconstrained optimization problems. In this method, a descent direction is found by applying a subgradient algorithm to the solution of a set of linear inequalities. The convergence of the proposed method is analyzed, and numerical experiments are carried out on general nonsmooth optimization test problems. To validate the method, the results are compared with those obtained by the subgradient method.
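For reference, the codifferential mentioned in the abstract is the construction of Demyanov and Rubinov: a function f is codifferentiable at x if its increment admits a max-min representation over a pair of compact convex sets. A standard statement of the definition (quoted from the general literature, not from the thesis text) is

    f(x + \Delta) = f(x)
        + \max_{(a, v) \in \underline{d} f(x)} \bigl( a + \langle v, \Delta \rangle \bigr)
        + \min_{(b, w) \in \overline{d} f(x)} \bigl( b + \langle w, \Delta \rangle \bigr)
        + o(\|\Delta\|),

where \underline{d} f(x), \overline{d} f(x) \subset \mathbb{R}^{n+1} are compact convex sets (the hypodifferential and hyperdifferential) normalized so that \max_{(a,v)} a = 0 and \min_{(b,w)} b = 0; the pair D f(x) = [\underline{d} f(x), \overline{d} f(x)] is called a codifferential of f at x.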
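In bundle-type methods of this kind, the quadratic programming subproblem for the search direction is typically the minimum-norm-element problem over the convex hull of the collected generalized-gradient elements. The sketch below illustrates that generic subproblem in Python; the function name, the choice of SLSQP as the QP solver, and the toy data are illustrative assumptions, not the thesis implementation.

    # Illustrative sketch (not the thesis code): compute a search direction
    # from a finite bundle V = {v_1, ..., v_m} of generalized-gradient
    # elements by solving
    #     min || sum_i lam_i * v_i ||^2   s.t.  lam_i >= 0,  sum_i lam_i = 1,
    # and taking d = -v* as the candidate descent direction.
    import numpy as np
    from scipy.optimize import minimize

    def search_direction(V):
        """V: (m, n) array whose rows are the bundle elements v_i."""
        m = V.shape[0]
        objective = lambda lam: float(np.dot(lam @ V, lam @ V))
        constraints = ({'type': 'eq', 'fun': lambda lam: lam.sum() - 1.0},)
        bounds = [(0.0, None)] * m
        lam0 = np.full(m, 1.0 / m)          # start at the barycenter
        res = minimize(objective, lam0, method='SLSQP',
                       bounds=bounds, constraints=constraints)
        v_star = res.x @ V                  # minimum-norm element of conv(V)
        return -v_star, res.x               # search direction, convex weights

    # Toy usage: two subgradients of f(x) = |x1| + |x2| at a kink on the x2-axis.
    V = np.array([[1.0, 1.0], [-1.0, 1.0]])
    d, lam = search_direction(V)            # d is approximately (0, -1)

The size of this QP grows with the bundle. The aggregation idea described in the abstract can be read as keeping the bundle at a fixed size by replacing discarded elements with the single aggregated element v_agg = sum_i lam_i v_i, a standard device in bundle methods, stated here as an assumption about the thesis variant.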
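Performance profiles for solver comparisons of this kind usually follow Dolan and Moré. Assuming that construction is what is meant here, a minimal sketch is:

    # Minimal sketch of Dolan-More performance profiles (assumed
    # construction; the abstract does not spell out the details).
    import numpy as np

    def performance_profile(T, taus):
        """T: (n_problems, n_solvers) cost matrix, e.g. CPU times,
        with np.inf marking failures. Returns rho[s, k], the fraction
        of problems solver s solves within a factor taus[k] of the
        best solver on that problem."""
        best = T.min(axis=1, keepdims=True)   # best cost per problem
        ratios = T / best                     # performance ratios r_{p,s}
        return np.array([[np.mean(ratios[:, s] <= tau) for tau in taus]
                         for s in range(T.shape[1])])

Plotting rho against tau gives one curve per solver; curves that are higher and reach 1 sooner indicate the more robust and efficient solver.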
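The extension to linear inequality constraints via slack variables presumably follows the usual reformulation

    \min f(x) \ \text{s.t.}\ Ax \le b
    \quad\Longrightarrow\quad
    \min f(x) \ \text{s.t.}\ Ax + s = b,\ s \ge 0,

so that the inequalities become equalities at the cost of a nonnegative slack vector s. This is the standard device; the thesis may differ in details.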
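Finally, on the nonconvex algorithm: in the quasisecant literature (Bagirov and Ganjehlou), a vector v \in \mathbb{R}^n is called a quasisecant of f at x with length h > 0 in a direction g, \|g\| = 1, if

    f(x + h g) - f(x) \le h \, \langle v, g \rangle,

and a descent direction is sought as a solution d of the system of linear inequalities \langle v, d \rangle < 0 over a collected set of quasisecants, which is plausibly the system the subgradient algorithm mentioned in the abstract is applied to. This summary is based on the standard quasisecant construction, not on the thesis text itself.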