Scene Analysis under Variable Illumination using Gradient Domain Methods

umi-umd-3488.pdf (26.11 MB)
The goal of this research is to develop algorithms for the reconstruction and manipulation of gradient fields for scene analysis from intensity images captured under variable illumination. These methods use gradients, or differential measurements of intensity and depth, to analyze a scene: estimating shape, recovering intrinsic images, and suppressing edges under variable illumination. The differential measurements lead to robust reconstruction from gradient fields in the presence of outliers, and avoid hard thresholds and smoothness assumptions when manipulating image gradient fields.

Reconstruction from gradient fields is important in several applications, including shape extraction using Photometric Stereo and Shape from Shading, image editing and matting, Retinex, mesh smoothing, and phase unwrapping. In these applications a non-integrable gradient field is available, which must be integrated to obtain the final image or surface. Previous approaches for enforcing integrability have focused on least-squares solutions, which do not work well in the presence of outliers and do not locally confine errors during reconstruction. I present a generalized equation that represents a continuum of surface reconstructions of a given non-integrable gradient field. This equation is used to derive new types of feature-preserving surface reconstructions in the presence of noise and outliers. The range of solutions is related to the degree of anisotropy of the weights applied to the gradients during integration.
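As a rough illustration of the least-squares baseline that this work generalizes (not the anisotropic, feature-preserving method itself), the following sketch integrates a possibly non-integrable gradient field by solving the forward-difference system in a least-squares sense. The function name and interface are hypothetical:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def integrate_least_squares(gx, gy):
    """Least-squares integration of a gradient field into a surface z,
    determined up to an additive constant.

    gx: (h, w-1) forward differences along x; gy: (h-1, w) along y.
    Minimizes ||Dx z - gx||^2 + ||Dy z - gy||^2 over all surfaces z,
    which is the (unweighted, isotropic) baseline; outliers in the
    gradients are smeared globally rather than locally confined."""
    h, w = gy.shape[0] + 1, gx.shape[1] + 1
    assert gx.shape == (h, w - 1) and gy.shape == (h - 1, w)
    idx = np.arange(h * w).reshape(h, w)

    rows, cols, vals, b = [], [], [], []
    eq = 0
    # One equation per forward x-difference: z[i, j+1] - z[i, j] = gx[i, j]
    for i in range(h):
        for j in range(w - 1):
            rows += [eq, eq]; cols += [idx[i, j + 1], idx[i, j]]
            vals += [1.0, -1.0]; b.append(gx[i, j]); eq += 1
    # One equation per forward y-difference: z[i+1, j] - z[i, j] = gy[i, j]
    for i in range(h - 1):
        for j in range(w):
            rows += [eq, eq]; cols += [idx[i + 1, j], idx[i, j]]
            vals += [1.0, -1.0]; b.append(gy[i, j]); eq += 1

    A = sp.csr_matrix((vals, (rows, cols)), shape=(eq, h * w))
    z = lsqr(A, np.array(b))[0].reshape(h, w)
    return z - z.mean()  # fix the free constant by zero-mean convention
```

For an integrable input field the reconstruction is exact up to the constant; for a non-integrable one, lsqr returns the global least-squares compromise, which is precisely the behavior the anisotropic weighting in this work is designed to improve.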

Traditionally, image gradient fields have been manipulated using hard thresholds to recover reflectance/illumination maps or to remove illumination effects such as shadows, and smoothness of the reflectance/illumination maps is often assumed. By analyzing the directions of intensity gradient vectors in images captured under different illumination conditions, I present a framework for edge suppression that avoids hard thresholds and smoothness assumptions. The framework manipulates image gradient fields to synthesize computationally useful and visually pleasing images, and is based on two approaches: (a) gradient projection and (b) affine transformation of gradient fields using cross-projection tensors. These approaches are demonstrated in several applications, such as removing shadows and glass reflections, and recovering reflectance/illumination maps and foreground layers under varying illumination.