Dynamic programming for a Markov-switching jump-diffusion

Azevedo N., Pinheiro D., Weber G. W.

JOURNAL OF COMPUTATIONAL AND APPLIED MATHEMATICS, vol.267, pp.1-19, 2014 (SCI-Expanded)

  • Publication Type: Article
  • Volume: 267
  • Publication Date: 2014
  • Doi Number: 10.1016/j.cam.2014.01.021
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus
  • Page Numbers: pp.1-19
  • Keywords: Stochastic optimal control, Jump-diffusion, Markov-switching, Optimal consumption-investment, NEURAL-NETWORK, PORTFOLIO, MODELS, SYSTEM
  • Middle East Technical University Affiliated: Yes


We consider an optimal control problem with a deterministic finite horizon and state variable dynamics given by a Markov-switching jump-diffusion stochastic differential equation. Our main results extend the dynamic programming technique to this larger family of stochastic optimal control problems. More specifically, we provide a detailed proof of Bellman's optimality principle (or dynamic programming principle) and obtain the corresponding Hamilton-Jacobi-Bellman equation, which turns out to be a partial integro-differential equation due to the extra terms arising from the Lévy process and the Markov process. As an application of our results, we study a finite-horizon consumption-investment problem for a jump-diffusion financial market consisting of one risk-free asset and one risky asset whose coefficients are assumed to depend on the state of a continuous-time finite-state Markov process. We provide a detailed study of the optimal strategies for this problem, for the economically relevant families of power utilities and logarithmic utilities. (C) 2014 Elsevier B.V. All rights reserved.
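To make the model class concrete, the state dynamics described in the abstract can be sketched numerically. The following is a minimal illustrative simulation (not taken from the paper) of a risky-asset price driven by a Markov-switching jump-diffusion: a two-state continuous-time Markov chain modulates the drift and volatility, and a compound Poisson component adds jumps. All parameter values are hypothetical and chosen only for demonstration.

```python
import numpy as np

# Hypothetical parameters (not from the paper), for illustration only.
rng = np.random.default_rng(0)

T, n = 1.0, 1000                  # horizon and number of Euler time steps
dt = T / n

mu = np.array([0.05, -0.02])      # regime-dependent drift
sigma = np.array([0.15, 0.30])    # regime-dependent volatility
Q = np.array([[-1.0, 1.0],        # generator matrix of the Markov chain
              [2.0, -2.0]])
lam, jump_mean, jump_sd = 0.5, -0.05, 0.02  # jump intensity and log-jump size

x, e = 1.0, 0                     # initial price and initial regime
path = [x]
for _ in range(n):
    # Regime switch with probability ~ -Q[e, e] * dt over a small step.
    if rng.random() < -Q[e, e] * dt:
        e = 1 - e
    # Brownian increment for the diffusion part.
    dW = rng.normal(0.0, np.sqrt(dt))
    # Compound Poisson jump: at most one jump per small step.
    jump = rng.normal(jump_mean, jump_sd) if rng.random() < lam * dt else 0.0
    # Log-Euler step for the geometric dynamics in the current regime.
    x *= np.exp((mu[e] - 0.5 * sigma[e] ** 2) * dt + sigma[e] * dW + jump)
    path.append(x)

print(f"terminal value: {path[-1]:.4f}")
```

The value function of the associated control problem would be computed over such paths; the paper characterizes it instead through the partial integro-differential Hamilton-Jacobi-Bellman equation, whose integral term comes from the jump measure and whose coupling across regimes comes from the generator Q.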