JOTA: Convex optimization approach to dynamic programming in continuous state and action spaces

September 3, 2020

The paper “A convex optimization approach to dynamic programming in continuous state and action spaces,” authored by Insoon Yang, has been accepted for publication in the Journal of Optimization Theory and Applications (JOTA). This paper proposes a convex optimization-based approximate dynamic programming (DP) method that can handle high-dimensional action spaces, and establishes its theoretical properties, including uniform convergence. Through several numerical experiments, we also demonstrate the performance and utility of our method.

Abstract: In this paper, a convex optimization-based method is proposed for numerically solving dynamic programs in continuous state and action spaces. The key idea is to approximate the output of the Bellman operator at a particular state by the optimal value of a convex program. The approximate Bellman operator has a computational advantage because it involves a convex optimization problem in the case of control-affine systems and convex costs. Using this feature, we propose a simple dynamic programming algorithm that evaluates the approximate value function at pre-specified grid points by solving convex optimization problems in each iteration. We show that the proposed method approximates the optimal value function with a uniform convergence property in the case of convex optimal value functions. We also propose an interpolation-free design method for a control policy, whose performance converges uniformly to the optimum as the grid resolution becomes finer. When a nonlinear control-affine system is considered, the convex optimization approach provides an approximate policy with a provable suboptimality bound. For general cases, the proposed convex formulation of dynamic programming operators can be modified into a nonconvex bi-level program, in which the inner problem is a linear program, without losing uniform convergence properties.
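To make the core idea concrete, below is a minimal sketch (not the paper's actual algorithm) of a value-iteration sweep in which each Bellman backup at a grid point is solved as a convex program. All modeling choices here are my own illustrative assumptions: a one-dimensional control-affine system x⁺ = a·x + b·u, a quadratic stage cost, and a convex piecewise-linear value-function model given by the pointwise maximum of affine pieces, so that the backup is convex in the action u.

```python
# Illustrative sketch only: grid-based approximate value iteration where each
# Bellman backup is a convex program (hypothetical model, not the paper's code).
import numpy as np
import cvxpy as cp

a, b, gamma = 0.9, 0.5, 0.95           # assumed dynamics x+ = a*x + b*u and discount
grid = np.linspace(-2.0, 2.0, 21)      # pre-specified state grid
V = np.zeros_like(grid)                # current value estimates at grid points

def affine_pieces(xs, vs):
    """Slopes/intercepts of chords between consecutive grid points; for convex
    grid data their pointwise maximum is the convex PWL interpolation (with
    linear extrapolation outside the grid)."""
    slopes = np.diff(vs) / np.diff(xs)
    intercepts = vs[:-1] - slopes * xs[:-1]
    return slopes, intercepts

for _ in range(50):                    # value-iteration sweeps
    slopes, intercepts = affine_pieces(grid, V)
    V_new = np.empty_like(V)
    for i, x in enumerate(grid):
        u = cp.Variable()              # action (decision variable)
        t = cp.Variable()              # epigraph variable for V(next state)
        x_next = a * x + b * u         # affine in u (control-affine system)
        cons = [t >= slopes[j] * x_next + intercepts[j] for j in range(len(slopes))]
        cons += [u >= -1.0, u <= 1.0]  # box constraint on the action
        cost = x**2 + u**2 + gamma * t # convex stage cost + discounted value-to-go
        prob = cp.Problem(cp.Minimize(cost), cons)
        prob.solve()
        V_new[i] = prob.value          # approximate Bellman backup at grid point x
    V = V_new
```

The point of the sketch is the structure: because the next state is affine in u and the value-function model is convex, each backup is a single convex program, which is what makes the approach scale to higher-dimensional action spaces.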
