IMAGE RETRIEVAL BASED ON COMPLEX DESCRIPTIVE QUERIES
dc.contributor.advisor | DAVIS, LARRY S | en_US |
dc.contributor.author | Siddiquie, Behjat | en_US |
dc.contributor.department | Computer Science | en_US |
dc.contributor.publisher | Digital Repository at the University of Maryland | en_US |
dc.contributor.publisher | University of Maryland (College Park, Md.) | en_US |
dc.date.accessioned | 2012-02-17T06:42:21Z | |
dc.date.available | 2012-02-17T06:42:21Z | |
dc.date.issued | 2011 | en_US |
dc.description.abstract | The amount of visual data such as images and videos available on the web has increased exponentially over the last few years. To efficiently organize and exploit these massive collections, a system should be able not only to answer simple classification-based questions, such as whether a specific object is present (or absent) in an image, but also to search images and videos based on more complex descriptive queries. There is also a considerable amount of structure present in the visual world which, if effectively utilized, can help achieve this goal. To this end, we first present an approach for image ranking and retrieval based on queries consisting of multiple semantic attributes. We further show that significant correlations exist between these attributes, and that accounting for them leads to superior performance. Next, we extend this by proposing an image retrieval framework for descriptive queries composed of object categories, semantic attributes and spatial relationships. The proposed framework also includes a unique multi-view hashing technique, which enables query specification in three different modalities: image, sketch and text. We also demonstrate the effectiveness of leveraging contextual information to reduce the supervision requirements for learning object and scene recognition models. We present an active learning framework to simultaneously learn appearance and contextual models for scene understanding. Within this framework we introduce new kinds of labeling questions that are designed to collect appearance as well as contextual information and that mimic the way in which humans actively learn about their environment. Furthermore, we explicitly model the contextual interactions between the regions within an image and select the question which leads to the maximum reduction in the combined entropy of all the regions in the image (image entropy). | en_US |
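The question-selection criterion described at the end of the abstract can be illustrated with a minimal sketch. This is a simplified, hypothetical rendering (the names `image_entropy` and `select_question` are illustrative, and it ignores the contextual coupling between regions that the dissertation's full model captures): image entropy is taken as the sum of per-region label entropies, and the chosen question is the one whose expected posterior beliefs yield the largest entropy reduction.

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a discrete label distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def image_entropy(region_beliefs):
    """Combined entropy of an image: sum of per-region label entropies.
    (Simplification: treats regions as independent.)"""
    return sum(entropy(beliefs) for beliefs in region_beliefs)

def select_question(region_beliefs, candidate_questions):
    """Pick the labeling question with the largest expected entropy reduction.

    candidate_questions maps a question id to the per-region beliefs
    expected after that question is answered (a hypothetical interface).
    """
    current = image_entropy(region_beliefs)
    best_q = max(candidate_questions.items(),
                 key=lambda item: current - image_entropy(item[1]))
    return best_q[0]

# Two regions, each uniformly uncertain over two labels (1 bit each).
beliefs = [[0.5, 0.5], [0.5, 0.5]]
# Asking about region 0 resolves it fully; the other question changes nothing.
questions = {
    "label_region_0": [[1.0], [0.5, 0.5]],
    "no_op":          [[0.5, 0.5], [0.5, 0.5]],
}
print(select_question(beliefs, questions))  # → label_region_0
```

Here resolving region 0 cuts the image entropy from 2 bits to 1 bit, so that question wins; in the dissertation's setting the post-answer beliefs would instead come from propagating the answer through the contextual model.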
dc.identifier.uri | http://hdl.handle.net/1903/12244 | |
dc.subject.pqcontrolled | Computer science | en_US |
dc.subject.pquncontrolled | Active Learning | en_US |
dc.subject.pquncontrolled | Attributes | en_US |
dc.subject.pquncontrolled | Complex Queries | en_US |
dc.subject.pquncontrolled | Image Retrieval | en_US |
dc.subject.pquncontrolled | Multi-Modal data | en_US |
dc.subject.pquncontrolled | Multi-View Hashing | en_US |
dc.title | IMAGE RETRIEVAL BASED ON COMPLEX DESCRIPTIVE QUERIES | en_US |
dc.type | Dissertation | en_US |
Files
Original bundle
- Name: Siddiquie_umd_0117E_12685.pdf
- Size: 10.34 MB
- Format: Adobe Portable Document Format