ON SYMMETRY: A FRAMEWORK FOR AUTOMATED SYMMETRY DETECTION

Date

2013

Abstract

Symmetry has woven itself into almost every fabric of science, as well as the arts, and has left an indelible imprint on our everyday lives. In the same manner, it has pervaded a wide range of areas of computer science, especially computer vision, and a copious literature has been produced in search of algorithmic ways to identify symmetry in digital data. Notwithstanding decades of effort to build an efficient system that can locate and recover symmetry embedded in real-world images, it is still challenging to fully automate such tasks while maintaining a high level of efficiency. The subject of this thesis is the symmetry of imaged objects. Symmetry is one of the non-accidental features of shapes and has long been (perhaps mistakenly) speculated to be a pre-attentive feature, one that improves recognition of quickly presented objects and reconstruction of shapes from an incomplete set of measurements. While symmetry is known to provide rich and useful geometric cues to computer vision, it has rarely been used as a principal feature in applications, because figuring out how to represent and recognize the symmetries embedded in objects is a singularly difficult task, both for computer vision and for perceptual psychology. The three main problems addressed in the dissertation are: (i) finding approximate symmetry by identifying the most prominent axis of symmetry within an entire region, (ii) locating bilaterally symmetrical areas in a scene, and (iii) automating the process of symmetry recovery by solving the two problems above.

Perfect symmetries are extremely rare in natural images, and human symmetry perception allows for qualification, so that symmetry can be graduated according to the degree of structural deformation or replacement error. Many approaches detect approximate symmetry by searching for an optimal solution through an exhaustive exploration of the parameter space or by estimating the center of mass. The algorithm set out in this thesis circumvents these computationally intensive operations by exploiting geometric constraints of symmetric images, and it assumes no prior knowledge of the barycenter. Results from an extensive set of evaluation experiments on metrics for symmetry distance, together with a performance comparison between the method presented in this thesis and the state-of-the-art approach, are reported as well.
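
As a concrete illustration of graded symmetry, the sketch below scores a 2-D point set by reflecting it about a candidate axis and measuring the residual nearest-neighbour error: a score of zero means perfect bilateral symmetry, and larger scores grade increasing structural deformation. This is only a minimal Python sketch of the general idea, not the symmetry-distance metric or the axis-finding algorithm developed in the dissertation, and the function names are hypothetical.

    # Hypothetical sketch of a graded bilateral-symmetry score for a 2-D point set.
    # It is not the metric defined in the dissertation; it only illustrates how
    # symmetry can be graduated by the residual error left after mirroring a
    # shape about a candidate axis.

    import numpy as np

    def reflect(points, origin, direction):
        """Reflect 2-D points about the line through `origin` with direction `direction`."""
        d = direction / np.linalg.norm(direction)
        rel = points - origin
        along = rel @ d                       # component along the axis (kept)
        across = rel - np.outer(along, d)     # component across the axis (flipped)
        return origin + np.outer(along, d) - across

    def symmetry_distance(points, origin, direction):
        """Mean nearest-neighbour error between the point set and its mirror image."""
        mirrored = reflect(points, origin, direction)
        # Brute-force nearest neighbour (O(n^2)); fine for a small illustration.
        dists = np.linalg.norm(points[None, :, :] - mirrored[:, None, :], axis=2)
        return dists.min(axis=1).mean()

    if __name__ == "__main__":
        # A slightly perturbed isosceles triangle: nearly symmetric about the y-axis.
        pts = np.array([[-1.0, 0.0], [1.02, 0.05], [0.0, 2.0]])
        score = symmetry_distance(pts, origin=np.array([0.0, 0.0]),
                                  direction=np.array([0.0, 1.0]))
        print(f"graded symmetry distance: {score:.3f}")

An exhaustive detector would evaluate such a score over a dense grid of candidate axes; the approach described above aims precisely to avoid that search by exploiting geometric constraints and by not relying on the barycenter.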

Many biological vision systems employ a special computational strategy to locate regions of interest based on local image cues while viewing a compound visual scene. The method taken in this thesis is a bottom-up approach in which the observer favors stimuli according to their saliency, and it creates a feature map contingent on symmetry. With the help of summed area tables, the time complexity of the proposed algorithm is linear in the size of the image. The regions thus distinguished are then passed to the algorithm described above to uncover approximate symmetry.
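
The summed area table (integral image) mentioned above is a standard device: one linear pass over the image yields a table from which the sum of any axis-aligned rectangle can be read in constant time, so window-based scores evaluated over the whole image remain linear in the number of pixels. The minimal Python sketch below shows the table and the constant-time box query; the symmetry-based feature map of the dissertation itself is not reproduced here.

    # Minimal summed-area-table (integral image) illustration.
    # After one linear pass, any axis-aligned rectangular sum costs O(1),
    # which is what keeps sliding-window scores linear in the image size.

    import numpy as np

    def summed_area_table(img):
        """Cumulative sums over rows then columns; one linear pass over the pixels."""
        return img.cumsum(axis=0).cumsum(axis=1)

    def box_sum(sat, top, left, bottom, right):
        """Sum of img[top:bottom+1, left:right+1] in O(1) using the table."""
        total = sat[bottom, right]
        if top > 0:
            total -= sat[top - 1, right]
        if left > 0:
            total -= sat[bottom, left - 1]
        if top > 0 and left > 0:
            total += sat[top - 1, left - 1]
        return total

    if __name__ == "__main__":
        img = np.arange(25, dtype=np.float64).reshape(5, 5)
        sat = summed_area_table(img)
        # Sum of the 3x3 window img[1:4, 1:4], recovered in constant time.
        print(box_sum(sat, 1, 1, 3, 3), img[1:4, 1:4].sum())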
