Planning Under Uncertainty: Moving Forward

Files

umi-umd-3647.pdf (1.03 MB)

Date

2006-07-24

Abstract

Reasoning about uncertainty is an essential component of many real-world planning problems, such as robotics and space applications, military operations planning, air and ground traffic control, and manufacturing systems. The two predominant approaches to planning under uncertainty are based on Markov Decision Processes (MDPs) and Symbolic Model Checking. Despite recent advances in these approaches, planning under uncertainty remains very hard: the planning algorithms must reason about more than one possible execution path in the world, and the size of the solution plans may grow exponentially with the size of the problem.

This dissertation describes a suite of new algorithms for planning under uncertainty. The new algorithms are substantially more efficient than previous techniques; in some cases, they find solutions exponentially faster. In particular, our contributions are as follows:

(1) A method for taking any forward-chaining classical planning algorithm and systematically generalizing it to plan in nondeterministic planning domains, where the likelihoods of the possible outcomes of actions are not known.

(2) A way, called "Forward State-Space Splitting (FS3)," to take the search-control (i.e., pruning) technique of any forward-chaining classical planner, such as TLPlan [BK00], TALplanner [KD01], or SHOP2 [NAI+03], and combine it with Binary Decision Diagrams (BDDs). This combination yields a suite of new planning algorithms for nondeterministic planning domains.

(3) A way to incorporate the pruning technique of a forward-chaining classical planner into the algorithms previously developed for planning with MDPs. The result is a suite of enhanced planning algorithms for MDP planning; a minimal sketch of the general idea appears below.
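
To make the idea in (3) concrete, the sketch below shows one simple way a forward search-control rule can be combined with a standard MDP algorithm: states are first enumerated by chaining forward from the initial state, a user-supplied pruning predicate (standing in for the search-control knowledge of a planner such as TLPlan or SHOP2) discards unpromising states, and value iteration is then run only over the surviving states. The MDP encoding, the "prune" predicate, and all names here are illustrative assumptions for exposition; this is not the dissertation's FS3 construction or its actual MDP algorithms.

from collections import deque

def reachable_states(s0, actions, prune):
    """Forward-chain from s0, discarding any state rejected by the pruning rule."""
    seen, frontier = {s0}, deque([s0])
    while frontier:
        s = frontier.popleft()
        for a in actions(s):
            for s_next, _prob, _reward in a(s):  # each action maps a state to its possible outcomes
                if s_next not in seen and not prune(s_next):
                    seen.add(s_next)
                    frontier.append(s_next)
    return seen

def value_iteration(states, actions, gamma=0.95, eps=1e-6):
    """Standard value iteration, restricted to the pruned, forward-reachable state set."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max((sum(p * (r + gamma * V.get(s2, 0.0)) for s2, p, r in a(s))
                        for a in actions(s)), default=0.0)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V

# Hypothetical toy domain: walk right along cells 0..5 toward a goal at cell 5;
# each move succeeds with probability 0.9 and slips back with probability 0.1.
def actions(s):
    if s >= 5:  # goal cell: no actions available
        return []
    def move_right(state):
        return [(state + 1, 0.9, 1.0 if state + 1 == 5 else 0.0),
                (max(state - 1, 0), 0.1, 0.0)]
    return [move_right]

def prune(s):
    return s > 5  # stand-in search-control rule: never expand past the goal

values = value_iteration(reachable_states(0, actions, prune), actions)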

The new planning techniques described here are potentially applicable to other research areas as well. In particular, this dissertation describes such potential applications in Reinforcement Learning, Hybrid Systems Control, and Planning with Temporal Uncertainty. Finally, the closing remarks include a discussion of the challenges of using search control in planning under uncertainty and some possible ways to address them.
