FEATURE LEARNING AND ACTIVE LEARNING FOR IMAGE QUALITY ASSESSMENT
dc.contributor.advisor | Chellappa, Rama | en_US |
dc.contributor.advisor | Doermann, David | en_US |
dc.contributor.author | Ye, Peng | en_US |
dc.contributor.department | Electrical Engineering | en_US |
dc.contributor.publisher | Digital Repository at the University of Maryland | en_US |
dc.contributor.publisher | University of Maryland (College Park, Md.) | en_US |
dc.date.accessioned | 2014-06-24T05:37:28Z | |
dc.date.available | 2014-06-24T05:37:28Z | |
dc.date.issued | 2014 | en_US |
dc.description.abstract | With the increasing popularity of mobile imaging devices, digital images have become an important vehicle for representing and communicating information. Unfortunately, digital images may be degraded at various stages of their life cycle. These degradations may lead to the loss of visual information, resulting in an unsatisfactory experience for human viewers and difficulties for image processing and analysis at subsequent stages. The problem of visual information quality assessment plays an important role in numerous image/video processing and computer vision applications, including image compression, image transmission, and image retrieval. There are two divisions of Image Quality Assessment (IQA) research - Objective IQA and Subjective IQA. For objective IQA, the goal is to develop a computational model that can accurately and automatically predict the quality of a distorted image with respect to human perception or other measures of interest. For subjective IQA, the goal is to design experiments for acquiring human subjects' opinions on image quality. Subjective IQA is often used to construct image quality datasets and to provide the ground truth for building and evaluating objective quality measures. In this thesis, we address both aspects of the IQA problem. For objective IQA, our work focuses on the most challenging category of objective IQA tasks - general-purpose No-Reference IQA (NR-IQA), where the goal is to evaluate the quality of digital images without access to reference images and without prior knowledge of the types of distortions. First, we introduce a feature learning framework for NR-IQA. Our method learns discriminative visual features in the spatial domain instead of relying on hand-crafted features; it therefore significantly reduces feature computation time compared to previous state-of-the-art approaches while achieving state-of-the-art prediction accuracy. Second, we present an effective method for extending existing NR-IQA models to "Opinion-Free" (OF) models, which do not require human opinion scores for training. In particular, we accomplish this by using Full-Reference (FR) IQA measures to train NR-IQA models. Unsupervised rank aggregation is applied to combine different FR measures into a synthetic score, which serves as a better "gold standard". Our method significantly outperforms previous OF NR-IQA methods and is comparable to state-of-the-art NR-IQA methods trained on human opinion scores. Unlike objective IQA, subjective IQA tests ask humans to evaluate image quality and are generally considered the most reliable way to evaluate the visual quality of digital images as perceived by the end user. We present a hybrid subjective test which combines Absolute Categorical Rating (ACR) tests and Paired Comparison (PC) tests via a unified probabilistic model and an active sampling method. Our method actively constructs a set of queries consisting of ACR and PC tests based on the expected information gain provided by each test and can effectively reduce the number of tests required to achieve a target accuracy. Our method can be used in conventional laboratory studies as well as in crowdsourcing experiments. Experimental results show that our method outperforms state-of-the-art subjective IQA tests in a crowdsourced setting. | en_US |
dc.identifier.uri | http://hdl.handle.net/1903/15140 | |
dc.language.iso | en | en_US |
dc.subject.pqcontrolled | Electrical engineering | en_US |
dc.subject.pqcontrolled | Computer science | en_US |
dc.subject.pquncontrolled | active learning | en_US |
dc.subject.pquncontrolled | computer vision | en_US |
dc.subject.pquncontrolled | feature learning | en_US |
dc.subject.pquncontrolled | image quality | en_US |
dc.subject.pquncontrolled | machine learning | en_US |
dc.title | FEATURE LEARNING AND ACTIVE LEARNING FOR IMAGE QUALITY ASSESSMENT | en_US |
dc.type | Dissertation | en_US |
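As a rough illustration of the feature learning framework summarized in the abstract above, the following is a minimal sketch of codebook-based feature learning in the spatial domain: raw grayscale patches are sampled and contrast-normalized, a codebook is learned with K-means, each image is encoded by soft assignment and max pooling, and a regressor maps the encoded features to quality scores. The function names, patch size, codebook size, and the choice of an SVR regressor are illustrative assumptions, not the implementation described in the dissertation.

# Minimal sketch of codebook-based NR-IQA feature learning (illustrative only).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR

PATCH = 7     # patch size in pixels (assumed value)
K = 100       # codebook size (assumed value; practical systems use many more codewords)

def extract_patches(img, n_patches=500, seed=0):
    """Sample raw grayscale patches and apply local contrast normalization."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    ys = rng.integers(0, h - PATCH, n_patches)
    xs = rng.integers(0, w - PATCH, n_patches)
    patches = np.stack([img[y:y + PATCH, x:x + PATCH].ravel().astype(float)
                        for y, x in zip(ys, xs)])
    patches -= patches.mean(axis=1, keepdims=True)
    patches /= patches.std(axis=1, keepdims=True) + 1e-8
    return patches

def encode(patches, codebook):
    """Soft-assignment encoding against the codebook, then max pooling over patches."""
    sims = patches @ codebook.T                          # similarity to each codeword
    pos, neg = np.maximum(sims, 0), np.maximum(-sims, 0)  # sign-split coding
    return np.concatenate([pos.max(axis=0), neg.max(axis=0)])

def train(images, scores):
    """Learn a codebook from pooled patches, encode each image, and fit a regressor."""
    pooled = np.vstack([extract_patches(im) for im in images])
    codebook = KMeans(n_clusters=K, n_init=4, random_state=0).fit(pooled).cluster_centers_
    feats = np.stack([encode(extract_patches(im), codebook) for im in images])
    return codebook, SVR(kernel="linear").fit(feats, scores)

def predict(img, codebook, regressor):
    """Predict a quality score for a single grayscale image."""
    return regressor.predict(encode(extract_patches(img), codebook)[None, :])[0]

In a pipeline of this kind, feature extraction reduces to patch sampling, normalization, and dot products against the codebook, which is consistent with the abstract's emphasis on low feature computation time relative to hand-crafted alternatives.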