Multi-time Scale Markov Decision Processes


Files

TR_2002-6.pdf (436.65 KB)

Date

2002


Abstract

This paper proposes a simple analytical model called the M time-scale Markov Decision Process (MMDP) for hierarchically structured sequential decision-making processes, where decisions at each level of the M-level hierarchy are made on M different time scales.

In this model, the state space and the control space of each level in the hierarchy are non-overlapping with those of the other levels, and the hierarchy is structured in a "pyramid" sense: a decision made at level m (slower time scale), and/or the level-m state, affects the evolution of the decision-making process at the lower level m+1 (faster time scale) until a new decision is made at the higher level, but the lower-level decisions themselves do not affect the higher level's transition dynamics. The performance produced by the lower level's decisions does, however, affect the higher level's decisions.
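
To make this structure concrete, the following is a minimal simulation sketch (not taken from the paper) of the two-level case M = 2; all state and action space sizes, transition kernels, rewards, and the epoch length below are hypothetical. The slow (level-m) state and action fix the fast (level-m+1) dynamics for one decision epoch, the slow dynamics ignore the fast decisions, and the fast level's accumulated performance is what feeds back into the slow level's reward.

import numpy as np

rng = np.random.default_rng(0)

N_SLOW, N_FAST = 3, 4      # sizes of the disjoint slow/fast state spaces (hypothetical)
A_SLOW, A_FAST = 2, 2      # sizes of the disjoint slow/fast action spaces (hypothetical)
EPOCH = 5                  # fast steps per slow decision epoch (hypothetical)

# Slow-level dynamics: unaffected by fast-level decisions ("pyramid" structure).
P_slow = rng.dirichlet(np.ones(N_SLOW), size=(N_SLOW, A_SLOW))

# Fast-level dynamics and rewards are indexed by the current slow state and slow
# action, which stay fixed for the whole epoch.
P_fast = rng.dirichlet(np.ones(N_FAST), size=(N_SLOW, A_SLOW, N_FAST, A_FAST))
r_fast = rng.random((N_SLOW, A_SLOW, N_FAST, A_FAST))

def run_epoch(slow_state, slow_action, fast_state, fast_policy):
    """Simulate one slow-level epoch; return the accumulated fast-level reward.

    This accumulated performance is what feeds back into the slow level's
    single-step reward in the hierarchical objective described below."""
    total = 0.0
    for _ in range(EPOCH):
        a = fast_policy(slow_state, slow_action, fast_state)
        total += r_fast[slow_state, slow_action, fast_state, a]
        fast_state = rng.choice(N_FAST, p=P_fast[slow_state, slow_action, fast_state, a])
    return total, fast_state

# Example: run one epoch under a random fast-level policy, then make one slow transition.
fast_policy = lambda s, a, x: int(rng.integers(A_FAST))
perf, _ = run_epoch(slow_state=0, slow_action=1, fast_state=0, fast_policy=fast_policy)
next_slow = rng.choice(N_SLOW, p=P_slow[0, 1])
print(f"epoch performance fed back to the slow level: {perf:.3f}")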

A hierarchical objective function is defined such that the finite-horizon value of following a (nonstationary) policy at level m+1 over a decision epoch of level m, plus an immediate reward at level m, is the single-step reward for the level-m decision-making process. From this we define a "multi-level optimal value function" and derive a "multi-level optimality equation."
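
Read this way, and under hypothetical notation (the abstract gives no formulas), the level-m single-step reward composes the immediate level-m reward with the finite-horizon value earned at level m+1 during one level-m epoch, and the multi-level optimality equation is then the corresponding Bellman equation. A schematic sketch:

% Hedged sketch in hypothetical notation; the paper's exact formulation
% (discounting, horizon handling, which level-(m+1) policy is used) may differ.
\[
  R_m(x_m, a_m) \;=\; r_m(x_m, a_m) \;+\; V^{\pi_{m+1}}_{T_m}\!\bigl(x_{m+1} \,;\, x_m, a_m\bigr),
\]
\[
  V^{*}_m(x_m) \;=\; \max_{a_m}\Bigl[\, R_m(x_m, a_m)
    \;+\; \sum_{x_m'} P_m\bigl(x_m' \mid x_m, a_m\bigr)\, V^{*}_m(x_m') \Bigr],
\]

where x_m and a_m are the level-m state and action, T_m is the length of a level-m decision epoch measured in level-(m+1) steps, and V^{\pi_{m+1}}_{T_m} is the finite-horizon value of the level-(m+1) policy \pi_{m+1} over that epoch.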

We discuss how to solve MMDPs exactly or approximately, and we also study heuristic online methods for solving them. Finally, we give some example control problems that can be modeled as MMDPs.
