Tech Report CS-97-14
Exploiting Structure for Planning and Control
Discrete dynamical systems in the form of finite automata or Markov decision processes have been used as a representational and computational foundation for planning under uncertainty. Many AI planning problems can be conveniently viewed as control problems over the underlying discrete dynamical systems. Using AI-style representations, the features of an application domain are encoded as state variables, so a planning problem instance compactly encodes a very large discrete dynamical system. The standard algorithms for solving the corresponding control problems require explicit enumeration of the underlying state space, which is impractical because the size of the state space is exponential in the number of state variables.
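To make the blow-up concrete, here is a minimal sketch (not taken from the thesis; all names are illustrative) of the flat state space that an explicit-enumeration algorithm must traverse when a domain is described by boolean state variables:

```python
# Illustrative sketch: with n boolean state variables, the flat state
# space that explicit algorithms (e.g. standard value iteration) must
# enumerate contains 2**n states.
from itertools import product

def flat_state_space(num_vars):
    """Enumerate every joint assignment to num_vars boolean state variables."""
    return list(product([False, True], repeat=num_vars))

# Ten variables already yield 1024 states; growth is exponential.
assert len(flat_state_space(10)) == 2 ** 10
```

Even a modest domain description with a few dozen variables therefore induces a state space far too large to enumerate explicitly.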
In this thesis, we develop decomposition techniques that exploit structure in planning problems from different application domains. Given a problem instance, we first symbolically decompose the underlying dynamical system into subsystems. After analyzing the local dynamics within each subsystem and the dependencies across subsystems, we decompose the original planning problem into a sequence of subproblems governing the subsystems. Rather than explicitly constructing the entire system, we compactly construct each subsystem by abstracting away the state variables that are irrelevant to that subsystem and to its interactions with the other subsystems. Solutions to the planning subproblems can then be derived by applying standard algorithms for the corresponding control problems to these compactly constructed subsystems, and a solution to the original planning problem is obtained by systematically compiling and propagating the solutions to the subproblems. These decomposition techniques shed light on how to exploit locality and dependency to cope with very large state spaces.
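The abstraction step above can be sketched as follows. This is a hedged illustration under assumed structure, not the thesis's actual construction: subsystems are represented simply as sets of relevant state variables, and each subsystem's local state space is the projection of the implicit global space onto those variables.

```python
# Illustrative sketch of abstracting away irrelevant state variables:
# each subsystem keeps only the variables relevant to it, so its local
# state space is exponential only in that smaller variable set.
# Variable names and the two-subsystem split are hypothetical examples.
from itertools import product

def abstract_states(all_vars, relevant_vars):
    """Project the implicit global state space onto a subsystem's
    relevant variables, yielding its compact local state space."""
    kept = [v for v in all_vars if v in relevant_vars]
    return [dict(zip(kept, values))
            for values in product([False, True], repeat=len(kept))]

all_vars = ["door_open", "robot_in_lab", "light_on", "battery_ok"]
sub1 = abstract_states(all_vars, {"door_open", "robot_in_lab"})
sub2 = abstract_states(all_vars, {"light_on", "battery_ok"})

# Two local spaces of 4 states each replace one flat space of 16 states.
assert len(sub1) + len(sub2) == 8 < 2 ** len(all_vars)
```

Standard control algorithms are then run on each small local space, and the local solutions are compiled and propagated across the subsystem dependencies to recover a solution to the original problem.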