The solution of the HJB equation is the 'value function', which gives the optimal cost-to-go for a given dynamical system with an associated cost function. The solution is open loop, but it also permits the solution of the closed loop problem.
The equation is a result of the theory of dynamic programming which was pioneered in the 1950s by Richard Bellman and coworkers.[1] The corresponding discrete-time equation is usually referred to as the Bellman equation. In continuous time, the result can be seen as an extension of earlier work in classical physics on the Hamilton-Jacobi equation by William Rowan Hamilton and Carl Gustav Jacob Jacobi.
Optimal control problems
Consider the following problem in deterministic optimal control over the time period [0, T]:

$$ \min_{u} \left\{ \int_0^T C[x(t),u(t)]\,dt + D[x(T)] \right\} $$

where C[·] is the scalar cost rate function, D[·] gives the terminal cost at the final state x(T), x(t) is the system state vector, x(0) is assumed given, and u(t) for 0 ≤ t ≤ T is the control vector that we are trying to find.

The system must also be subject to

$$ \dot{x}(t) = F[x(t),u(t)] $$

where F[·] gives the vector determining the physical evolution of the state vector over time.
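As a concrete, purely illustrative instance (the constants a, b, q, r, and q_T below are assumptions introduced here, not taken from the text above), consider a scalar linear system with quadratic costs:

$$ F[x,u] = a x + b u, \qquad C[x,u] = q x^2 + r u^2, \qquad D[x] = q_T x^2, \qquad r > 0. $$

The problem is then to steer the scalar state x(t) from a given x(0) over [0, T] while trading off state deviation against control effort. This instance is revisited below, since it lets the HJB equation be worked out in closed form.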
The partial differential equation
For this simple system, the Hamilton-Jacobi-Bellman partial differential equation is

$$ \frac{\partial V}{\partial t}(x,t) + \min_{u} \left\{ \nabla_x V(x,t) \cdot F(x,u) + C(x,u) \right\} = 0 $$

subject to the terminal condition

$$ V(x,T) = D(x). $$

The unknown scalar V(x,t) in the above PDE is the Bellman 'value function', which represents the cost incurred from starting in state x at time t and controlling the system optimally from then until time T.
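For the illustrative linear-quadratic instance above (again, an assumption introduced for illustration), one can guess a quadratic value function V(x,t) = p(t) x^2. Substituting this ansatz into the HJB equation, minimizing over u, and matching coefficients of x^2 gives the optimal feedback control and a scalar Riccati differential equation for p(t):

$$ u^*(x,t) = -\frac{b\,p(t)}{r}\,x, \qquad \dot{p}(t) + 2a\,p(t) - \frac{b^2}{r}\,p(t)^2 + q = 0, \qquad p(T) = q_T. $$

In this special case the PDE collapses to an ordinary differential equation, which is what makes the linear-quadratic regulator tractable.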
Deriving the equation
Intuitively, the HJB equation can be "derived" as follows. If V(x(t),t) is the optimal cost-to-go function (also called the 'value function'), then by Richard Bellman's principle of optimality, going from time t to t + dt, we have

$$ V(x(t),t) = \min_{u} \left\{ C(x(t),u(t))\,dt + V(x(t+dt),\,t+dt) \right\}. $$

Expanding the last term on the right-hand side in a Taylor series,

$$ V(x(t+dt),\,t+dt) = V(x(t),t) + \frac{\partial V}{\partial t}\,dt + \nabla_x V \cdot \dot{x}(t)\,dt + o(dt), $$

then cancelling V(x(t),t) on both sides, dividing by dt, and taking the limit as dt approaches zero yields the HJB equation given above.

Solving the equation
The HJB equation needs to be solved backwards in time, starting from t = T and ending at t = 0. The HJB equation is a necessary and sufficient condition for an optimum.[2] If we can solve for V, then we can find from it a control u that achieves the minimum cost.
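The following is a minimal numerical sketch of this backward-in-time solution for the illustrative linear-quadratic instance above; the constants and step count are assumptions chosen for demonstration, and a simple Euler march stands in for a proper ODE solver. It integrates the scalar Riccati equation from the terminal condition p(T) = q_T back to t = 0.

import numpy as np

# Assumed data for the illustrative scalar LQ example (not from the original text).
a, b = -1.0, 1.0           # dynamics: x_dot = a*x + b*u
q, r, q_T = 1.0, 1.0, 0.5  # running cost q*x^2 + r*u^2, terminal cost q_T*x^2
T, n_steps = 1.0, 1000
dt = T / n_steps

# Riccati equation: p_dot = -(2*a*p - (b**2/r)*p**2 + q),
# integrated backwards from the terminal condition p(T) = q_T.
p = np.empty(n_steps + 1)
p[n_steps] = q_T
for k in range(n_steps, 0, -1):
    p_dot = -(2.0 * a * p[k] - (b**2 / r) * p[k]**2 + q)
    p[k - 1] = p[k] - dt * p_dot   # Euler step from t_k back to t_(k-1)

# The value function is V(x, t_k) ~ p[k] * x**2, and the optimal feedback
# control is u*(x, t_k) = -(b * p[k] / r) * x.
print("p(0) =", p[0])
print("feedback gain at t = 0:", -(b * p[0] / r))

Once p(t) is known on the grid, the optimal control at any state and time is read off directly from the feedback law, which is the sense in which solving the HJB equation also yields the solution of the closed loop problem.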
In the general case, the HJB equation does not have a classical (smooth) solution. Several notions of generalized solutions have been developed to cover such situations, including the viscosity solution (Pierre-Louis Lions and Michael Crandall), the minimax solution (Andrei Izmailovich Subbotin), and others.