Sensor, Motion and Temporal Planning

umi-umd-3542.pdf (6.27 MB)

In this dissertation we describe planning strategies that enhance the accuracy of visual surveillance and expand the capabilities of visual surveillance systems. Three classes of planning strategies are considered: sensor planning, motion planning, and temporal planning.

Sensor planning is the study of camera control to optimize information gathering for vision algorithms. It spans camera placement strategies, active camera control (specifically, Pan-Tilt-Zoom, or PTZ, cameras), and, in some cases, camera selection from a collection of static cameras. Camera placement strategies have previously been employed to enhance vision algorithms such as 3D reconstruction, area coverage in surveillance, and occlusion and visibility analysis. We introduce a two-camera placement strategy that allows a background subtraction algorithm to achieve video-rate performance and invariance to several illumination artifacts, such as lighting changes and shadows. While camera placement can improve the performance of vision algorithms significantly, its utility is limited in situations where it is more cost-effective to use an existing camera network instead. In these situations, we can employ camera selection strategies that choose, from the network, the cameras that yield the best performance for the surveillance task at hand. We illustrate this with an algorithm that detects and tracks people under severe occlusion by selecting the best stereo pairs for counting people in a scene.

The study of sensor planning is also closely related to motion and temporal planning, which involve predicting the future trajectories of objects from previously observed tracks and are very useful for modeling interactions between moving objects in the scene.
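As a minimal illustration of the kind of trajectory prediction that motion and temporal planning build on, the sketch below extrapolates a track with a constant-velocity motion model. This is a deliberately simple stand-in, not the dissertation's actual predictive model; the function name and the motion model are assumptions made here for illustration.

```python
import numpy as np

def predict_track(observed, n_future):
    """Extrapolate a track of (x, y) positions n_future steps ahead,
    assuming a constant-velocity motion model (an illustrative
    simplification of the trajectory prediction discussed above)."""
    observed = np.asarray(observed, dtype=float)
    # Estimate velocity as the mean frame-to-frame displacement.
    velocity = np.diff(observed, axis=0).mean(axis=0)
    last = observed[-1]
    steps = np.arange(1, n_future + 1)[:, None]
    return last + steps * velocity

# A target moving 1 unit/frame right and 0.5 units/frame up.
track = [(0, 0), (1, 0.5), (2, 1.0), (3, 1.5)]
future = predict_track(track, 3)
# future -> [[4., 2.], [5., 2.5], [6., 3.]]
```

Richer models (e.g., ones that account for acceleration or scene constraints) would replace the constant-velocity assumption, but the interface, observed track in, predicted positions out, stays the same.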
We exploit this in an active camera system we have developed for reasoning about periods of occlusion; the system selects cameras and PTZ settings that, with high probability, can capture unobstructed video segments. Finally, we introduce a left-package system. This system first detects an abandoned package in the scene and then searches back in time to determine the window during which the package was left. Images or video segments collected during that window can then be retrieved to identify the person who left the package. We present the left-package detection sub-system and show that it detects abandoned packages even under severe occlusion, without any hard thresholding steps.
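The selection step described above can be reduced to a simple decision rule: given a predicted probability of occlusion for each candidate camera (or PTZ setting) over the capture window, choose the candidate with the lowest predicted probability. The sketch below shows only that final decision; the camera identifiers and probability values are hypothetical, and estimating the probabilities themselves is the hard part the dissertation addresses.

```python
def select_camera(occlusion_prob):
    """Pick the camera whose predicted occlusion probability over the
    planned capture window is lowest. `occlusion_prob` maps a camera id
    (hypothetical) to a predicted probability in [0, 1]."""
    return min(occlusion_prob, key=occlusion_prob.get)

# Illustrative predicted occlusion probabilities per candidate camera.
probs = {"ptz_1": 0.70, "ptz_2": 0.15, "static_3": 0.40}
best = select_camera(probs)
# best -> "ptz_2"
```

In practice one would also require the minimum to fall below an acceptable risk level before committing a PTZ camera to the capture, but the argmin over predicted occlusion risk is the core of the selection.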