Markov Decision Models with Weighted Discounted Criteria


Files

TR_91-43.pdf (745.62 KB)


Date

1991

Abstract

We consider a discrete-time Markov Decision Process with infinite horizon. The criterion to be maximized is the sum of a number of standard discounted rewards, each with a different discount factor. Situations in which such criteria arise include the modeling of investments, projects of different durations, systems with different time scales, and some axiomatic formulations of multi-attribute preference theory. We show that for this criterion there need not exist an ε-optimal (randomized) stationary strategy for some positive ε, even when the state and action sets are finite. However, ε-optimal Markov (non-randomized) strategies and optimal Markov strategies exist under weak conditions. We exhibit ε-optimal Markov strategies that are stationary from some time onward. When both state and action spaces are finite, there exists an optimal Markov strategy with this property. We provide an explicit algorithm for the computation of such strategies.
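
To make the criterion concrete, the following is a minimal sketch (not taken from the report) of evaluating a fixed stationary strategy under a weighted discounted criterion in a finite MDP: the total value is the sum of the standard discounted values, one per discount factor. All names here (P, rewards, betas, policy) are illustrative assumptions, not the report's notation or algorithm.

    # Minimal sketch: weighted discounted evaluation of a stationary policy.
    # Each component k has its own reward r_k and discount factor beta_k;
    # the criterion is the sum over k of the standard beta_k-discounted values.
    import numpy as np

    def weighted_discounted_value(P, rewards, betas, policy):
        """P[a]          : (S, S) transition matrix under action a
           rewards[k][a] : (S,) reward vector of component k under action a
           betas[k]      : discount factor of component k, 0 <= beta_k < 1
           policy[s]     : action chosen in state s (stationary, deterministic)
        """
        S = P[0].shape[0]
        # Transition matrix induced by the stationary policy.
        P_pi = np.array([P[policy[s]][s] for s in range(S)])
        total = np.zeros(S)
        for r_k, beta_k in zip(rewards, betas):
            r_pi = np.array([r_k[policy[s]][s] for s in range(S)])
            # Standard policy evaluation: v_k = (I - beta_k * P_pi)^{-1} r_pi
            v_k = np.linalg.solve(np.eye(S) - beta_k * P_pi, r_pi)
            total += v_k
        return total

As the abstract notes, optimizing this summed criterion is not the same as optimizing a single discounted reward: a stationary strategy that is ε-optimal may fail to exist, which is why the report turns to Markov strategies that become stationary after some finite time.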
