Computer Science Theses and Dissertations
Permanent URI for this collection: http://hdl.handle.net/1903/2756
Search Results
2 results
Item
Resource Allocation in Computer Vision (2013)
Chen, Daozheng; Jacobs, David W.; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

We broadly examine resource allocation in several computer vision problems, considering both human and computational resource constraints. Human resources, such as operators monitoring a camera network, provide reliable information but are typically limited by the huge amount of data to be processed. Computational resources refer to the resources machines use, such as running time, to execute programs. It is important to develop algorithms that make effective use of these resources in computer vision applications.

We approach human resource constraints through a frame retrieval problem in a camera network. This work addresses the problem of using active inference to direct human attention when searching a camera network for people who match a query image. We find that by representing the camera network as a graphical model, we can more accurately determine which video frames match the query and improve our ability to direct human attention. We experiment with different methods for deciding from which frames to sample expert information, and find that a method that learns to predict which frame is misclassified performs best.

We then approach the problem of allocating computational resources in a video processing task. We consider an application in which we combine the outputs of two algorithms so that a budget-limited, computationally more expensive algorithm is run on the most useful video frames, maximizing processing performance. We model the video frames as a chain graphical model and extend a dynamic programming algorithm to determine on which frames to run the more expensive algorithm. Experiments on moving object detection and face detection demonstrate the effectiveness of our approaches.

Finally, we consider an idea for saving computational resources while maintaining program performance: learning model complexity in latent variable models. Specifically, we learn the latent variable state space complexity in latent support vector machines using group norm regularization. We apply our method to handwritten digit recognition and object detection with deformable part models. Our approach reduces the latent variable state size and performs faster inference with similar or better performance.
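To make the chain-model budget allocation concrete, here is a minimal sketch of the kind of dynamic program the abstract alludes to. It is not the dissertation's algorithm: the per-frame gains (`gain`), the pairwise interaction term (`pair`), and the unit-cost budget are all simplifying assumptions made for illustration, whereas the thesis derives frame utilities from the chain graphical model itself.

```python
def allocate_frames(gain, pair, budget):
    """Choose on which video frames to run the budget-limited,
    expensive algorithm, via dynamic programming over a chain.

    gain[i]    -- assumed per-frame benefit of running the expensive
                  algorithm on frame i (illustrative numbers only)
    pair[a][b] -- interaction between decisions on adjacent frames,
                  e.g. a penalty for redundant back-to-back runs
    budget     -- maximum number of frames that may be selected
    """
    n = len(gain)
    NEG = float("-inf")
    # dp[b][x]: best value over the prefix, with b selections made so
    # far and x (0/1) the decision at the current frame
    dp = [[NEG, NEG] for _ in range(budget + 1)]
    dp[0][0] = 0.0
    if budget >= 1:
        dp[1][1] = gain[0]
    back = []  # backpointer tables, one per frame after the first
    for i in range(1, n):
        ndp = [[NEG, NEG] for _ in range(budget + 1)]
        bp = [[None, None] for _ in range(budget + 1)]
        for b in range(budget + 1):
            for x in (0, 1):          # decision at frame i
                if b < x:
                    continue
                for px in (0, 1):     # decision at frame i - 1
                    prev = dp[b - x][px]
                    if prev == NEG:
                        continue
                    val = prev + x * gain[i] + pair[px][x]
                    if val > ndp[b][x]:
                        ndp[b][x] = val
                        bp[b][x] = px
        dp, back = ndp, back + [bp]
    # recover the best selection by walking the backpointers
    _, b, x = max((dp[b][x], b, x)
                  for b in range(budget + 1) for x in (0, 1))
    chosen = []
    for i in range(n - 1, 0, -1):
        if x:
            chosen.append(i)
        px = back[i - 1][b][x]
        b, x = b - x, px
    if x:
        chosen.append(0)
    return sorted(chosen)

# Hypothetical per-frame gains; discourage adjacent expensive runs.
gain = [0.9, 0.2, 0.8, 0.1, 0.7]
pair = [[0.0, 0.0], [0.0, -0.3]]
print(allocate_frames(gain, pair, budget=2))  # -> [0, 2]
```

The chain structure is what keeps this tractable: because the pairwise term couples only adjacent frames, the state at each step is just (budget spent, current decision), giving O(n x budget) time rather than a search over all frame subsets.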
Item
Scalable Statistical Modeling and Query Processing over Large Scale Uncertain Databases (2011)
Kanagal Shamanna, Bhargav; Deshpande, Amol; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

The past decade has witnessed a large number of novel applications that generate imprecise, uncertain, and incomplete data. Examples include monitoring infrastructures such as RFID and sensor networks, and web-based applications such as information extraction, data integration, social networking, and so on. In my dissertation, I addressed several challenges in managing such data and developed algorithms for efficiently executing queries over large volumes of it. Specifically, I focused on the following challenges.

First, for meaningful analysis of such data, we need the ability to remove noise and infer useful information from uncertain data. To address this challenge, I developed a declarative system for applying dynamic probabilistic models to databases and data streams. The output of such probabilistic modeling is probabilistic data, i.e., data annotated with probabilities of correctness or existence. Often, the data also exhibits strong correlations. Although there is prior work on managing and querying such probabilistic data using probabilistic databases, those approaches largely assume independence and cannot handle probabilistic data with rich correlation structures. Hence, I built a probabilistic database system that can manage large-scale correlations and developed algorithms for efficient query evaluation. Our system allows users to provide uncertain data as input and to specify arbitrary correlations among the entries in the database. In the back end, we represent correlations as a forest of junction trees, an alternative representation for probabilistic graphical models (PGMs). We execute queries over the probabilistic database by transforming them into message passing (inference) algorithms over the junction trees. However, traditional algorithms over junction trees typically require accessing the entire tree, even for small queries. Hence, I developed an index data structure over the junction tree, called INDSEP, that allows us to circumvent this process and thereby scalably evaluate inference queries, aggregation queries, and SQL queries over the probabilistic database.

Finally, query evaluation in probabilistic databases typically returns output tuples along with their probability values. However, this evaluation model provides very little intuition to users: for instance, a user might want to know "Why is this tuple in my result?", "Why does this output tuple have such high probability?", or "Which are the most influential input tuples for my query?" Hence, I designed a query evaluation model, and a suite of algorithms, that provide users with explanations for query results and enable them to perform sensitivity analysis to better understand those results.
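As a rough illustration of the junction-tree evaluation described above, the sketch below runs a single collect (upward) pass of message passing over a tiny tree of cliques to answer a marginal-probability query. All variable names and potentials here are hypothetical, the query variable is assumed to live in the root clique, and the sketch deliberately omits the scalability machinery (the INDSEP index, disk-resident trees) that the dissertation is actually about.

```python
from itertools import product

def collect(cliques, tree, cid, parent_vars):
    """Absorb messages from child cliques, then marginalize this
    clique's potential onto the separator shared with the parent."""
    vars_, pot = cliques[cid]
    pot = dict(pot)  # copy: incoming messages are multiplied in place
    for child in tree.get(cid, []):
        sep, msg = collect(cliques, tree, child, vars_)
        for assign in pot:
            key = tuple(assign[vars_.index(v)] for v in sep)
            pot[assign] *= msg.get(key, 0.0)
    sep = tuple(v for v in vars_ if v in parent_vars)
    out = {}
    for assign, w in pot.items():
        key = tuple(assign[vars_.index(v)] for v in sep)
        out[key] = out.get(key, 0.0) + w
    return sep, out

def query_marginal(cliques, tree, root, var):
    """Marginal of a variable appearing in the root clique: one
    collect pass up the junction tree, then normalize."""
    _, msg = collect(cliques, tree, root, (var,))
    z = sum(msg.values())
    return {k[0]: w / z for k, w in msg.items()}

# Hypothetical correlated Boolean "tuple exists" variables A, B, C,
# held in two cliques {A,B} and {B,C} joined on separator {B}.
cliques = {
    0: (("A", "B"), {(a, b): [[0.5, 0.1], [0.1, 0.3]][a][b]
                     for a, b in product((0, 1), repeat=2)}),
    1: (("B", "C"), {(b, c): [[0.6, 0.4], [0.2, 0.8]][b][c]
                     for b, c in product((0, 1), repeat=2)}),
}
tree = {0: [1]}
print(query_marginal(cliques, tree, root=0, var="A"))
# -> {0: 0.6, 1: 0.4} (up to float rounding)
```

This toy version visits every clique on each query, which is exactly the cost the abstract identifies as the bottleneck; the point of an index such as INDSEP is to answer such queries while touching only a small, query-relevant part of the tree.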