Title: Adaptive Algorithms for Automated Processing of Document Images
Authors: Agrawal, Mudit
Advisors: Davis, Larry
Department/Program: Computer Science
Sponsors: Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Subjects: Computer science
Keywords: document image processing; optical character recognition
Issue Date: 2011
Abstract: Large-scale document digitization projects continue to motivate interesting document understanding technologies such as script and language identification, page classification, segmentation and enhancement. Typically, however, solutions are still limited to narrow domains or regular formats such as books, forms, articles or letters, and operate best on clean documents scanned in a controlled environment. More general collections of heterogeneous documents challenge the basic assumptions of state-of-the-art technology regarding quality, script, content and layout. Our work explores the use of adaptive algorithms for the automated analysis of noisy and complex document collections.
We first propose, implement and evaluate an adaptive clutter detection and removal technique for complex binary documents. Our distance-transform-based technique aims to remove irregular and independent unwanted foreground content while leaving text content untouched. The novelty of this approach is its determination of the best approximation to the clutter-content boundary in the presence of text-like structures.
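As a rough illustration of the distance-transform idea (this is not the thesis algorithm itself), the following Python sketch erases connected components whose maximum stroke thickness, estimated from the distance transform, exceeds a threshold; the threshold value and the per-component thickness heuristic are assumptions introduced here for illustration.

import cv2

def remove_thick_clutter(binary, thickness_thresh=15):
    """binary: uint8 image with foreground (ink) = 255 and background = 0."""
    # Distance of each foreground pixel to the nearest background pixel;
    # large values indicate thick, blob-like strokes rather than text.
    dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
    # Label connected components of the foreground.
    num_labels, labels = cv2.connectedComponents(binary)
    cleaned = binary.copy()
    for label in range(1, num_labels):
        mask = labels == label
        # If the component's maximum half stroke width exceeds the assumed
        # threshold, treat the whole component as clutter and erase it.
        if dist[mask].max() > thickness_thresh:
            cleaned[mask] = 0
    return cleaned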
Second, we describe a page segmentation technique called Voronoi++ for complex layouts, which builds upon the state-of-the-art method proposed by Kise [Kise1999]. Our approach does not assume structured text zones and is designed to handle multilingual text in both handwritten and printed form. Voronoi++ is a dynamically adaptive and contextually aware approach that combines components' separation features with Docstrum-based [O'Gorman1993] angular and neighborhood features to form provisional zone hypotheses. These provisional zones are then verified based on the context built from local separation and high-level content features.
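As a loose illustration of forming provisional zone hypotheses from separation and angle features (this is not Voronoi++ itself; the Delaunay neighbor graph and the distance and angle thresholds are assumptions for the sketch), connected-component centroids could be grouped as follows:

import numpy as np
from scipy.spatial import Delaunay

def provisional_zones(centroids, max_dist=50.0, max_angle_deg=20.0):
    """centroids: (N, 2) NumPy array of component centroid (x, y) positions."""
    # Use Delaunay triangulation edges as a cheap stand-in for neighbor pairs.
    tri = Delaunay(centroids)
    edges = set()
    for simplex in tri.simplices:
        for i in range(3):
            a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
            edges.add((a, b))

    # Union-find over components; link pairs that are close together and
    # roughly horizontally aligned (a crude proxy for separation/angle cues).
    parent = list(range(len(centroids)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in edges:
        delta = centroids[b] - centroids[a]
        dist = np.linalg.norm(delta)
        angle = abs(np.degrees(np.arctan2(delta[1], delta[0])))
        angle = min(angle, 180 - angle)  # fold to [0, 90] degrees
        if dist <= max_dist and angle <= max_angle_deg:
            parent[find(a)] = find(b)

    # Components sharing a root form one provisional zone.
    return [find(i) for i in range(len(centroids))]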
Finally, our research proposes a generic model to segment and to recognize characters for any complex syllabic or non-syllabic script, using font models. This concept is based on the fact that font files contain all the information necessary to render text, and thus provide a model for how to decompose it. Instead of script-specific routines, this work is a step towards a generic character segmentation and recognition scheme for both Latin and non-Latin scripts.
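To make the font-model idea concrete, here is a hypothetical Python sketch (not the thesis pipeline) that treats a font file as the recognition model: it renders each glyph with Pillow and matches a segmented character image against the rendered templates by pixel overlap. The font path, character set and overlap score are all illustrative assumptions.

import numpy as np
from PIL import Image, ImageDraw, ImageFont

def build_font_templates(font_path, chars, size=32):
    """Render each character of `chars` from the font into a binary array."""
    font = ImageFont.truetype(font_path, size)
    templates = {}
    for ch in chars:
        img = Image.new("L", (size * 2, size * 2), 0)
        ImageDraw.Draw(img).text((0, 0), ch, fill=255, font=font)
        # Crop to the glyph's bounding box and binarize.
        templates[ch] = np.array(img.crop(img.getbbox())) > 0
    return templates

def recognize(char_img, templates):
    """char_img: binary NumPy array of one segmented character image."""
    best, best_score = None, -1.0
    for ch, tmpl in templates.items():
        # Resize the rendered glyph to the candidate's shape and score overlap.
        resized = np.array(
            Image.fromarray(tmpl.astype(np.uint8) * 255).resize(char_img.shape[::-1])
        ) > 0
        score = float(np.mean(resized == (char_img > 0)))
        if score > best_score:
            best, best_score = ch, score
    return best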
Appears in Collections: UMD Theses and Dissertations; Computer Science Theses and Dissertations