Optimization of reward shaping function based on genetic algorithm applied to a cross validated deep deterministic policy gradient in a powered landing guidance problem


Nugroho L., Andiarti R., Akmeliawati R., KUTAY A. T., Larasati D. K., Wijaya S. K.

Engineering Applications of Artificial Intelligence, vol. 120, 2023 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 120
  • Publication Date: 2023
  • DOI: 10.1016/j.engappai.2022.105798
  • Journal Name: Engineering Applications of Artificial Intelligence
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, Aerospace Database, Applied Science & Technology Source, Communication Abstracts, Computer & Applied Sciences, INSPEC, Metadex, Civil Engineering Abstracts
  • Keywords: Reward shaping function, GA-search, DDPG, Fitness, Reusable launch vehicle, Sliding guidance, Reinforcement
  • Middle East Technical University Affiliated: Yes

Abstract

One major capability of a Deep Reinforcement Learning (DRL) agent controlling a vehicle in an environment without any prior knowledge is decision-making based on a well-designed reward shaping function. The reward shaping function is an important but little-studied factor that can significantly alter the training reward score and performance outcomes. To maximize the control efficacy of a DRL algorithm, an optimized reward shaping function and a solid hyperparameter combination are essential. In this paper, a genetic-algorithm (GA) search is applied to the Deep Deterministic Policy Gradient (DDPG) algorithm to discover the best shape of the reward shaping function (RSF) for optimal control during the powered descent guidance (PDG) landing phase of a reusable launch vehicle. Although DDPG is quite capable of managing complex environments and producing actions for continuous spaces, its state and action performance can still be improved. A reference DDPG agent with the original reward shaping function and a PID controller were compared side by side with the GA-DDPG agent using the GA-optimized RSF. With the help of the potential-based GA (PbGA) searched RSF, the best GA-DDPG individual maximizes overall rewards and minimizes state errors, maintaining the highest fitness score among all individuals after extensive cross-validation and Monte-Carlo retesting.
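The abstract combines two ideas: potential-based reward shaping, which adds gamma*Phi(s') - Phi(s) to the environment reward without changing the optimal policy, and a genetic algorithm that searches the parameters of the shaping term by fitness score. The following is a minimal, hypothetical Python sketch of that combination; the toy dynamics, placeholder policy, GA settings, and all function names are illustrative assumptions, not the authors' implementation or their DDPG training loop.

```python
# Hypothetical sketch: a GA evolves the weights of a potential-based
# reward shaping function; each individual is scored by the return a
# stand-in agent achieves. Numbers and dynamics are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def potential(state, w):
    """Potential Phi(s): weighted penalty on state errors (assumed form)."""
    return -np.dot(w, np.abs(state))

def shaped_reward(r, s, s_next, w, gamma=0.99):
    """Potential-based shaping r' = r + gamma*Phi(s') - Phi(s),
    which preserves the optimal policy (Ng et al., 1999)."""
    return r + gamma * potential(s_next, w) - potential(s, w)

def fitness(w, episodes=5, horizon=100):
    """Stand-in for training/evaluating an agent with shaping weights w:
    a toy 3-D state is driven toward zero by a fixed placeholder policy."""
    total = 0.0
    for _ in range(episodes):
        s = rng.normal(size=3)
        for _ in range(horizon):
            a = -0.1 * s                          # placeholder policy
            s_next = s + a + 0.01 * rng.normal(size=3)
            r = -np.sum(s_next ** 2)              # base environment reward
            total += shaped_reward(r, s, s_next, w)
            s = s_next
    return total / episodes

# Simple GA: truncation selection plus Gaussian mutation.
pop = rng.uniform(0.0, 2.0, size=(20, 3))         # 20 individuals, 3 weights
for gen in range(10):
    scores = np.array([fitness(w) for w in pop])
    elite = pop[np.argsort(scores)[-5:]]          # keep the best 5
    children = elite[rng.integers(0, 5, 15)] + 0.1 * rng.normal(size=(15, 3))
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(w) for w in pop])]
print("best shaping weights:", best)
```

In the paper's setting, the fitness call would instead train and cross-validate a DDPG agent on the powered-descent environment, which is far more expensive; the GA structure around it stays the same.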