Computer Science Theses and Dissertations
Browsing by Issue Date, showing items 1-20 of 1000.
Item On Numerical Analysis in Residue Number Systems (1964)
Lindamood, George Edward; Rheinboldt, Werner C.; Computer Science Center

Recent attempts to utilize residue number systems in digital computers have raised numerous questions about adapting the techniques of numerical analysis to residue number systems. Among these questions are the fundamental problems of how to compare the magnitudes of two numbers, how to detect additive and multiplicative overflow, and how to divide in residue number systems. These three problems are treated in separate chapters of this thesis, and methods are developed therein whereby magnitude comparison, overflow detection, and division can be performed in residue number systems. In an additional chapter, the division method is extended to provide an algorithm for the direct approximation of square roots in residue number systems. Numerous examples are provided illustrating the nature of the problems considered and showing the use of the solutions presented in practical computations. A final chapter presents the results of extensive trial calculations in which a conventional digital computer was programmed to simulate the use of the division and square root algorithms in approximating quotients and square roots in residue number systems. These results indicate that, in practice, these division and square root algorithms usually converge to the quotient or square root somewhat faster than the theory suggests.
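The difficulty the thesis addresses can be seen in a minimal Python sketch of residue encoding (the moduli below are a hypothetical choice): addition and multiplication are digit-wise and carry-free, but magnitude comparison has no digit-wise answer, and naive comparison reconstructs the number via the Chinese Remainder Theorem, which is the cost that motivates the thesis's specialized methods.

```python
from math import prod

MODULI = (7, 11, 13, 15)  # pairwise coprime moduli (a hypothetical choice)

def to_residues(x, moduli=MODULI):
    """Encode an integer by its residue modulo each modulus."""
    return tuple(x % m for m in moduli)

def from_residues(r, moduli=MODULI):
    """Reconstruct the integer with the Chinese Remainder Theorem."""
    M = prod(moduli)
    x = 0
    for ri, mi in zip(r, moduli):
        Mi = M // mi
        x += ri * Mi * pow(Mi, -1, mi)  # pow(Mi, -1, mi) = modular inverse
    return x % M

a, b = to_residues(23), to_residues(45)
# Addition (likewise multiplication) is digit-wise, with no carries:
s = tuple((ai + bi) % m for ai, bi, m in zip(a, b, MODULI))
assert from_residues(s) == 23 + 45
# But deciding which of two encoded numbers is larger requires, naively,
# a full CRT reconstruction first.
```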
Item Restructuring Textual Information for Online Retrieval (1985)
Koved, Lawrence; Shneiderman, Ben; Computer Science

Two experiments were conducted to evaluate two styles of online documents. The first experiment compared paper manuals to online manuals using two different database structuring techniques: a sequential (linear) structure and a tree structure. People using the paper manuals were faster at solving problems than people using the computer manuals. No differences were found between the linear and tree structures, or in the accuracy of problem solutions. In a subjective evaluation of user preferences, the computer manuals were rated as better and more organized than the paper manuals. The second experiment compared two methods of retrieving online information that allowed the reader to specify the attributes needed to guide the information retrieval process. The first manual recorded the attributes entered by the reader via menus, and material in the manuals not relevant to the current search was pruned from the search space. The second manual did not record the menu selections, so the readers had to re-enter the attributes several times to complete the task. The manual that recorded the attributes allowed the readers to work over twice as fast and was preferred over the other manual. A theoretical foundation is presented for the online documentation used in the experiments: the user's traversal through the database is modeled as a graph search process, using a production system. The results of the experiments and their theoretical foundations are evaluated in terms of the impact they might have on future online document storage and retrieval systems.

Item Treemaps: Visualizing Hierarchical and Categorical Data (1993)
Johnson, Brian Scott; Shneiderman, Ben

Treemaps are a graphical method for the visualization of hierarchical and categorical data sets. Treemap presentations of data shift mental workload from the cognitive to the perceptual systems, taking advantage of the human visual processing system to increase the bandwidth of the human-computer interface. Efficient use of display space allows for the simultaneous presentation of thousands of data records, as well as facilitating the presentation of semantic information. Treemaps let users see the forest and the trees by providing local detail in the context of a global overview, providing a visually engaging environment in which to analyze, search, explore, and manipulate large data sets. The treemap method of hierarchical visualization is, at its core, based on the property of containment. This property of containment is a fundamental idea which powerfully encapsulates many of our reasons for constructing information hierarchies. All members of the treemap family of algorithms partition multi-dimensional display spaces based on weighted hierarchical data sets. In addition to generating treemaps and standard traditional hierarchical diagrams, the treemap algorithms extend non-hierarchical techniques such as bar and pie charts into the domain of hierarchical presentation. Treemap algorithms can be used to generate bar charts, outlines, traditional 2-D node and link diagrams, pie charts, cone trees, cam trees, drum trees, etc. Generating existing diagrams via treemap transformations is an exercise meant to show the power, ease, and generality with which alternative presentations can be generated from the basic treemap algorithms. Two controlled experiments with novice treemap users and real data highlight the strengths of treemaps. The first experiment, with 12 subjects, compares the Macintosh TreeViz implementation of treemaps with the UNIX command line for questions dealing with a 530-node file hierarchy. Treemaps are shown to significantly reduce user performance times for global file comparison tasks. A second experiment, with 40 subjects, compares treemaps with dynamic outlines for questions dealing with the allocation of funds in the 1992 US Budget (a 357-node budget hierarchy). Treemap users are 50% faster overall and as much as 8 times faster for specific questions.
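The recursive, weight-proportional partitioning at the heart of the treemap family can be sketched in a few lines. This is the original slice-and-dice layout, alternating split direction per level; the tree, names, and weights below are invented for illustration.

```python
def slice_and_dice(node, x, y, w, h, vertical=True, out=None):
    """Recursively partition a rectangle among a node's children in
    proportion to their weights, alternating split direction per level."""
    if out is None:
        out = []
    out.append((node["name"], x, y, w, h))
    children = node.get("children", [])
    total = sum(c["weight"] for c in children) or 1
    offset = 0.0
    for c in children:
        frac = c["weight"] / total
        if vertical:   # slice the rectangle left-to-right
            slice_and_dice(c, x + offset * w, y, frac * w, h, False, out)
        else:          # slice top-to-bottom
            slice_and_dice(c, x, y + offset * h, w, frac * h, True, out)
        offset += frac
    return out

tree = {"name": "root", "weight": 3, "children": [
    {"name": "a", "weight": 2, "children": []},
    {"name": "b", "weight": 1, "children": []}]}
for name, x, y, w, h in slice_and_dice(tree, 0, 0, 100, 100):
    print(f"{name}: ({x:.0f},{y:.0f}) {w:.0f}x{h:.0f}")
# root: (0,0) 100x100 / a: (0,0) 67x100 / b: (67,0) 33x100
```

The nesting of each child's rectangle inside its parent's is exactly the containment property the abstract describes.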
Item Adaptive Database Systems Based On Query Feedback and Cached Results (1994)
Chen, Chung-Min; Roussopoulos, Nick; Computer Science

This dissertation explores the query optimization technique of using cached results and feedback to improve the performance of database systems. Cached results and experience obtained by running queries are used to save execution time for follow-up queries, adapt data and system parameters, and improve overall system performance. First, we develop a framework which integrates query optimization and cache management. The optimizer is capable of generating efficient query plans using previous query results cached on disk. Alternative methods to access and update the caches are considered by the optimizer based on cost estimation. Different cache management strategies are also included in this framework for comparison. An empirical performance study verifies the advantage and practicality of this framework. To help the optimizer select the best plan, we propose a novel approach for providing accurate but cost-effective selectivity estimation. The distribution of attribute values is regressed in real time, using actual query result sizes obtained as feedback, to produce accurate selectivity estimates. This method avoids the expensive off-line database access overhead required by conventional methods and adapts fairly well to updates and query locality, which we verify empirically. To execute a query plan more efficiently, a buffer pool is usually provided for caching data pages in memory to reduce disk accesses. We enhance buffer utilization by devising a buffer allocation scheme for recurring queries using page-fault feedback obtained from previous executions. The performance improvement of this scheme is shown by empirical examples and a systematic simulation.
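The feedback idea can be made concrete with a small sketch: refit a low-degree polynomial approximation of the attribute's cumulative distribution from observed query result sizes, then answer range-selectivity estimates from the fitted curve. The class, degree, and fitting scheme below are hypothetical; the dissertation's actual regression method is more sophisticated.

```python
import numpy as np

class FeedbackSelectivity:
    """Approximate the cumulative distribution F(v) of an attribute with a
    low-degree polynomial refit from query feedback (a hypothetical sketch)."""
    def __init__(self, degree=3):
        self.degree = degree
        self.samples = []          # (value v, observed fraction of rows <= v)

    def record(self, high, observed_fraction):
        # Feedback from an executed range query "attr <= high".
        self.samples.append((high, observed_fraction))
        vs, fs = zip(*self.samples)
        self.coef = np.polyfit(vs, fs, min(self.degree, len(vs) - 1))

    def estimate(self, low, high):
        F = lambda v: float(np.clip(np.polyval(self.coef, v), 0.0, 1.0))
        return max(F(high) - F(low), 0.0)

est = FeedbackSelectivity()
est.record(50.0, 0.42)    # query "attr <= 50" returned 42% of the rows
est.record(80.0, 0.77)
print(est.estimate(50.0, 80.0))   # roughly 0.35
```

No off-line scan of the data is needed: every executed query refines the estimator for free, which is the point the abstract makes.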
Item Zoomable User Interfaces for the Authoring and Delivery of Slide Presentations (2003-10-27)
Good, Lance Everett; Bederson, Benjamin B; Stefik, Mark J; Computer Science

Millions of slide presentations are authored and delivered with computer software every day. Yet much of the computer's power for these tasks remains untapped. Existing interaction techniques leave presenters wrestling with limited-size computer displays to get meaningful overviews of their work. Without these overviews, they have trouble finding patterns in their data and experimenting with alternate organizations. They also have difficulty communicating the structure of large or complex talks to the audience and keeping the audience oriented during unexpected transitions between ideas. A natural solution is Zoomable User Interfaces (ZUIs), since they offer the capability to view information at multiple levels of detail and smoothly transition between ideas. This work presents two ZUIs, Niagara and CounterPoint, for authoring and delivering slide presentations. Niagara is a ZUI workspace for authoring presentation content, with techniques to improve authoring in the zoomable environment. Empirical evaluations of ZUI-based authoring tools revealed performance improvements and subjective preferences over folder-based interfaces for organization tasks. Users were 30% faster with ZUIs than with folders in completing a simplified shape-organization task. Some classes of users were also faster with ZUIs than with folders in completing a text-based organization task. Users performing both tasks exhibited a strong preference for ZUIs over folders. CounterPoint provides a number of features to simplify the creation and delivery of ZUI presentations. The effects of these presentations on the audience were evaluated in a controlled comparison of presentations with slides only, slides with spatial layouts, and slides with spatial layouts and animation. The study revealed a strong subjective preference and higher ratings of organization for presentations with spatial layout. Feedback was also gathered from presenters who used CounterPoint to deliver over 100 real-world presentations. They indicated that CounterPoint helped them communicate overviews and multi-level presentation structures. More experienced presenters also found that CounterPoint helped them keep the audience oriented when navigating the presentation in response to audience feedback.

Item Improving Data Delivery in Wide Area and Mobile Environments (2003-11-07)
Bright, Laura; Raschid, Louiqa; Bhattacharjee, Bobby; Computer Science

The popularity of the Internet has dramatically increased the diversity of clients and applications that access data across wide area networks and mobile environments. Data delivery in these environments presents several challenges. First, applications often have diverse requirements with respect to the latency of their requests and the recency of data. Traditional data delivery architectures do not provide interfaces to express these requirements. Second, it is difficult to accurately estimate when objects are updated. Existing solutions either require servers to notify clients (push-based), which adds overhead at servers and may not scale, or require clients to contact servers (pull-based), which relies on estimates that are often inaccurate in practice. Third, cache managers need a flexible and scalable way to determine whether an object in the cache meets a client's latency and recency preferences. Finally, mobile clients who access data on wireless networks share limited wireless bandwidth and typically have different QoS requirements for different applications. In this dissertation we address these challenges using two complementary techniques: client profiles and server cooperation. Client profiles are a set of parameters that enable clients to communicate application-specific latency and recency preferences to caches and wireless base stations. Profiles are used by cache managers to determine whether to deliver a cached object to the client or to validate the object at a remote server, and for scheduling data delivery to mobile clients. Server cooperation enables servers to provide resource information to cache managers, which enables cache managers to estimate the recency of cached objects. The main contributions of this dissertation are as follows. First, we present a flexible and scalable architecture to support client profiles that is straightforward to implement at a cache or wireless base station. Second, we present techniques to improve estimates of the recency of cached objects using server cooperation, by increasing the amount of information servers provide to caches. Third, for mobile clients, we present a framework for incorporating profiles into the cache utilization, downloading, and scheduling decisions at a wireless base station. We evaluate client profiles and server cooperation using synthetic and trace data. Finally, we present an implementation of profiles and experimental results.
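A client profile of the kind described here can be as small as a pair of tolerances consulted by the cache manager on every request. The sketch below is invented for illustration (field names and decision policy are assumptions, not the dissertation's architecture), but it shows how latency and recency preferences might drive the deliver-or-validate decision.

```python
from dataclasses import dataclass
import time

@dataclass
class ClientProfile:
    """Hypothetical profile: how stale and how slow a client will tolerate."""
    max_staleness: float   # seconds since last validation the client accepts
    max_latency: float     # seconds the client is willing to wait

def serve_from_cache(profile, cached_at, est_validation_latency):
    """Deliver the cached copy if it meets the recency preference; otherwise
    validate at the origin server only if the client can afford the latency."""
    age = time.time() - cached_at
    if age <= profile.max_staleness:
        return "deliver cached copy"
    if est_validation_latency <= profile.max_latency:
        return "validate at server, then deliver"
    return "deliver cached copy (stale, but within the latency budget)"

profile = ClientProfile(max_staleness=300.0, max_latency=2.0)
print(serve_from_cache(profile, time.time() - 60, est_validation_latency=1.5))
# -> deliver cached copy (60 s old is within the 300 s staleness budget)
```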
Item Explaining the emergence of cooperative traits: An axiomatic theory of accumulation (2003-11-25)
Perlitz, Michael; Swistak, Piotr T; Boyle, Mike M; Levermore, Charles D; Stricklin, William R; Applied Mathematics and Scientific Computation

In this dissertation I construct an axiomatic theory of action that explains how originally selfish individuals form aggregations and develop cooperative abilities. This theory is more general than the two most widespread biological explanations of the emergence of cooperation: kinship theory and game theoretic models. In particular, it introduces notions of space and time that are more general (individual specific) than the standard physical notions on which biological theories mostly rely. While the predictions of my theory agree, in principle, with the predictions of other main theories of aggregation, its scope goes well beyond that of any other theory of aggregation. For instance, I am able to show that two different arguments about properties of optimal-size aggregations in fact follow from a single set of assumptions, namely those of my theory of accumulations. I am also able to explain a paradoxical empirical finding on genetic variation. More specifically, Sibly (1983) showed that under a certain type of fitness function individuals will form aggregations whose fitness-optimizing group size is larger than the group size that eventually emerges in equilibrium. In response to Sibly, Giraldeau and Gillis (1984) presented a type of fitness function for which the optimal group size equals the equilibrium group size. Both arguments rely, however, on fitness functions that are postulated ad hoc. In my theory I show how both of these functions can be derived analytically from a set of more fundamental assumptions. This shows that the claims of Sibly and of Giraldeau and Gillis, while seemingly contradictory, were in fact consistent. Another example of an application of my theory concerns a genetic puzzle posed by Hedrick and Parker (1997). Hedrick and Parker observed that genetic variation in eusocials is not only higher than predicted by kinship theory but even higher than in solitaries. This empirical observation, paradoxical in the light of standard biological explanations, can in fact be explained by my theory.

Item FAST TRANSFORMS BASED ON STRUCTURED MATRICES WITH APPLICATIONS TO THE FAST MULTIPOLE METHOD (2003-12-01)
Tang, Zhihui; Elman, Howard C.; Duraiswami, Ramani; Applied Mathematics and Scientific Computation

The solution of many problems in engineering and science is enabled by the availability of a fast algorithm, a significant example being the fast Fourier transform, which computes the matrix-vector product for an $N \times N$ Fourier matrix in $O(N\log N)$ time. Related fast algorithms have since been devised to evaluate matrix-vector products for other structured matrices, such as matrices with Toeplitz, Hankel, or Vandermonde structure. A more recent fast algorithm is the fast multipole method (FMM). The original FMM evaluates all pair-wise interactions in large ensembles of $N$ particles in $O(p^2 N)$ time, where $p$ is the number of terms in the truncated multipole/local expansions it uses. It relies on analytical properties of translation operators, which shift the center of a multipole or local expansion to another location, or convert a multipole expansion into a local expansion. The original translation operators achieve the translation in $O(p^2)$ operations for a $p$-term expansion. Translation operations are among the most important and expensive steps in an FMM algorithm. The main focus of this dissertation is on developing fast, accurate algorithms for the translation operators in the FMM for Coulombic potentials in two or three dimensions. We show that the matrices involved in the translation operators of the FMM for Coulombic potentials can be expressed as products of structured matrices. Some of these matrices have fast transform algorithms available, while for others we show how they can be constructed. A particular algorithm we develop is for the fast computation of matrix-vector products of the form $Px$, $P'x$, and $PP'x$, where $P$ is a Pascal matrix. When considering fast translation algorithms for the 3D FMM, we decompose translations into an axial translation and a pair of rotations. We show how a fast axial translation can be performed. The bottleneck to achieving fast translation is then the lack of a fast rotation transform. A fast rotation algorithm is also important for many other applications, including quantum mechanics, geoscience, and computer vision, and fast rotation algorithms are being developed based on the properties of spherical harmonics. We follow an alternate path by showing that the rotation matrix $R$ can be factored in two different ways into products of structured matrices. Both factorizations allow a fast matrix-vector product. Our algorithm efficiently computes the coefficients of spherical harmonic expansions on rotation. Numerical experiments confirm that the new $O(p\log p)$ translation operators for both the 2D and 3D FMM have the same accuracy as the original ones, achieve their asymptotic complexity for modest $p$, and significantly speed up the FMM algorithms in 2D and 3D. We hope that this thesis will also lead to promising future research in establishing fast translations for the FMM for other potentials, as well as in applying the transforms to other applications such as harmonic analysis on the sphere.
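The Pascal-matrix product mentioned above is a good example of how structure yields speed. The lower-triangular Pascal matrix factors as diagonal times Toeplitz times diagonal, since $\binom{i}{j} = i! \cdot \frac{1}{(i-j)!} \cdot \frac{1}{j!}$, and a Toeplitz product is a convolution computable by FFT in $O(N \log N)$. The sketch below shows one standard such factorization, not necessarily the exact variant used in the thesis; the raw factorials overflow for large $N$, so practical versions rescale.

```python
import numpy as np
from math import factorial

def pascal_matvec(x):
    """Compute P @ x in O(N log N) for the lower-triangular Pascal matrix
    P[i, j] = C(i, j), via P = D1 * T * D2 with D1 = diag(i!),
    T[i, j] = 1/(i-j)! (Toeplitz), D2 = diag(1/j!).
    Unscaled factorials: fine for small N, overflow for large N."""
    n = len(x)
    fact = np.array([factorial(k) for k in range(n)], dtype=float)
    y = np.asarray(x, dtype=float) / fact            # D2 @ x
    t = 1.0 / fact                                   # Toeplitz kernel 1/k!
    conv = np.fft.irfft(np.fft.rfft(t, 2 * n) * np.fft.rfft(y, 2 * n),
                        2 * n)[:n]                   # T @ y as a convolution
    return fact * conv                               # D1 @ (T @ y)

x = np.array([1.0, 2.0, 3.0, 4.0])
P = np.array([[1, 0, 0, 0], [1, 1, 0, 0],
              [1, 2, 1, 0], [1, 3, 3, 1]], dtype=float)
assert np.allclose(pascal_matvec(x), P @ x)
```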
Item On the Implementation of an Accurate and Efficient Solver for Convection-Diffusion Equations (2003-12-03)
Wu, Chin-Tien; Elman, Howard C.; Applied Mathematics and Scientific Computation

In this dissertation, we examine several different aspects of computing the numerical solution of the convection-diffusion equation. The solution of this equation often exhibits sharp gradients due to Dirichlet outflow boundaries or discontinuities in boundary conditions. Because of the singularly perturbed nature of the equation, numerical solutions often have severe oscillations when grid sizes are not small enough to resolve sharp gradients. To overcome such difficulties, the streamline diffusion discretization method can be used to obtain an accurate approximate solution in regions where the solution is smooth. To increase the accuracy of the solution in regions containing layers, adaptive mesh refinement and mesh movement based on a posteriori error estimates can be employed. An error-adapted mesh refinement strategy based on a posteriori error estimates is also proposed to resolve layers. For solving the sparse linear systems that arise from discretization, geometric multigrid (MG) and algebraic multigrid (AMG) are compared. In addition, both methods are used as preconditioners for Krylov subspace methods. We derive some convergence results for MG with line Gauss-Seidel smoothers and bilinear interpolation. Finally, when adaptive mesh refinement is considered an integral part of the solution process, it is natural to set a stopping tolerance for the iterative linear solvers on each mesh stage so that the difference between the approximate solution obtained by the iterative method and the finite element solution is bounded by an a posteriori error bound. Here, we present two stopping criteria. The first is based on a residual-type a posteriori error estimator developed by Verfürth. The second is based on an a posteriori error estimator, using local solutions, developed by Kay and Silvester. Our numerical results show that the refined mesh obtained from the iterative solution satisfying the second criterion is similar to the refined mesh obtained from the finite element solution.
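The stopping-criterion idea is simple to state in code: there is no point iterating the linear solver far below the discretization error of the current mesh. The toy sketch below uses Jacobi iteration and the residual norm as a crude error proxy, with invented names and tolerance rule; the thesis's criteria are built on proper a posteriori estimators instead.

```python
import numpy as np

def solve_with_adaptive_tol(A, b, discretization_error_est, c=0.1):
    """Iterate only until the algebraic error proxy falls below a small
    fraction c of the a posteriori discretization-error estimate, so no
    work is wasted over-solving on a coarse mesh (hypothetical sketch)."""
    x = np.zeros_like(b)
    tol = c * discretization_error_est
    for k in range(10_000):
        r = b - A @ x
        if np.linalg.norm(r) <= tol:     # residual as a crude error proxy
            return x, k
        x = x + r / np.diag(A)           # one Jacobi sweep
    return x, k

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x, iters = solve_with_adaptive_tol(A, b, discretization_error_est=1e-3)
```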
Item Replication Techniques for Peer-to-Peer Networks (2003-12-05)
Silaghi, Bujor; Keleher, Peter J; Computer Science

A close examination of presently deployed peer-to-peer (P2P) networks and existing proposals reveals several trends regarding data management. First, only a small percentage of the queried data is actually retrieved by users. Second, the vast majority of data exported by participants in P2P networks is unaltered after creation. Third, membership changes in P2P networks (users/sites leaving or joining the network) occur on a scale not seen before in distributed systems. We address the challenges posed by these observations using two data replication techniques. First, we define a routing protocol and replication model for hierarchical namespaces that adaptively replicates routing information in response to the fluctuating load incurred by the lookup and search procedures. Second, we define a replica control protocol (d-spaces) based on quorum consensus in multi-dimensional spaces that is well suited to scenarios where reads are the dominant type of access to data. We extend the basic d-space protocol with a quorum reconfiguration mechanism that uses local connectivity and cached quorum groups instead of global views to achieve seemingly contradictory goals: improved fault tolerance through local reconfiguration policies, and good global latencies. Our main contributions are as follows. We show that hierarchical bottlenecks can be eliminated, with the resulting system behaving like a truly symmetrical P2P network. Temporal locality in the stream of requests (hot spots) is managed by replicating popular portions of the namespace on demand. We argue that non-virtualized object namespaces maintain desirable properties that are lost through virtualization. We present a token-based method of resolving complex search queries through hierarchical query decomposition and result recomposition. The method allows the initiator to ascertain query completion, and offers a broad range of trade-offs between network bandwidth consumption and the number of connections required to complete the search. We show that d-space quorums have optimal communication complexity. We define the Read-Few Write-Many replica control protocol and argue that the quality of its trade-off between read efficiency and update availability is not matched by existing replica control protocols. Finally, we present a quorum reconfiguration mechanism that allows membership changes without requiring global reconfiguration phases.
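The read/write trade-off that quorum-based replica control turns on is easiest to see in the classical two-dimensional grid construction, sketched below. This is a textbook scheme shown only for illustration, not the thesis's d-space or Read-Few Write-Many protocol: a read touches one row, a write touches a row plus a column, and every pair of quorums that must intersect does.

```python
import itertools

def grid_quorums(nodes, rows, cols):
    """Classical grid quorum system: a read quorum is any full row; a write
    quorum is a full row plus a full column. Any row meets any column, so
    every write intersects every read and every other write."""
    grid = [[nodes[r * cols + c] for c in range(cols)] for r in range(rows)]
    read_quorums = [set(row) for row in grid]
    write_quorums = [set(grid[r]) | {grid[i][c] for i in range(rows)}
                     for r in range(rows) for c in range(cols)]
    return read_quorums, write_quorums

reads, writes = grid_quorums(list(range(9)), 3, 3)
assert all(r & w for r, w in itertools.product(reads, writes))
assert all(w1 & w2 for w1, w2 in itertools.product(writes, writes))
# Reads contact ~sqrt(N) nodes, writes ~2*sqrt(N): cheap reads are bought
# with more expensive writes, the trade-off a read-dominant workload wants.
```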
Item MANAGING AND EXPLORING MEDIA USING SEMANTIC REGIONS: A SPATIAL INTERFACE SUPPORTING USER-DEFINED MENTAL MODELS (2003-12-05)
Kang, Hyunmo; Shneiderman, Ben; Computer Science

Computer users deal with large numbers of personal media objects such as images, audio clips, voice mails, video clips, web pages, emails, and various document files. Users often struggle to interpret, explore, arrange, and use personal media objects because of three major problems: an ever-increasing amount of personal media data, a rigid organizing metaphor, and difficulty in rapid data access. With the progress of computer hardware and digitization technologies at the rate of Moore's law, users generate, acquire, and store more and more personal media data on their personal machines. However, the means available for users to organize and customize their personal media information spaces are extremely poor and driven mostly by storage and distribution models, not users' needs. Furthermore, this rigid, system-oriented metaphor produces wide and deep file-folder hierarchies and often prevents rapid access to stored data. This dissertation uses a graphical mechanism for spatially organizing personal media data based on users' mental models and introduces an innovative interaction metaphor for applying those models to personal media data. Semantic Regions offers a new way to construct users' mental models by drawing regions on a 2D space and specifying the semantics of each region, so that users can apply various personal ontologies to personal media data using the fling-and-flock metaphor. The prototype application, MediaFinder, validates the usability of the interface, particularly in comparison with alternative approaches. Contributions of this dissertation include:
- Semantic Regions, a formal model of spatial and dynamic reorganization of personal media data based on users' mental models. It extends the concept of a system-oriented file management system to a user-oriented personal media management system by employing the semantics of personal media data.
- The MediaFinder personal media management tool, which uses Semantic Regions, the fling-and-flock metaphor, and flexible categorization capabilities to explore and manage personal media data sets. MediaFinder's object-oriented architecture can easily be extended to support variants of the Semantic Regions model.
- The design and implementation of interaction models that support personal media management tasks such as organization, meaning extraction, navigation, search, and indexing.
- Two usability studies that provided preliminary insight into the utility of Semantic Regions and led to design improvements for the construction and operation of Semantic Regions.
- A framework for extending the Semantic Regions model, including descriptions of possible extensions.

Item Optimizing the Execution of Batches of Data Analysis Queries (2004-02-05)
Aryangat, Suresh; Sussman, Alan; Computer Science

Data analysis applications such as Kronos, a remote sensing application, and the Virtual Microscope, a telepathology application, require operating on and processing large datasets. In such situations, it is important that the storage, retrieval, and manipulation of these datasets be handled efficiently. Past research focused on the creation of database systems that abstract the data analysis process into a framework facilitating the design of algorithms to optimize the execution of scientific queries and batches of queries. These optimizations occur at different levels in the query processing chain of the database system. The present research deals with the optimizations performed by the database system when processing batches of queries. Various algorithms to optimize the memory utilization of multiple data analysis queries are presented, and the effect of each on query processing performance is investigated.

Item Inducing Semantic Frames from Lexical Resources (2004-02-17)
Green, Rebecca Joyce; Dorr, Bonnie J; Computer Science

The multiplicity of ways in which propositional content can be expressed is often referred to as the paraphrase problem. This phenomenon creates challenges for such applications as information retrieval, information extraction, text summarization, and machine translation: natural language understanding needs to recognize what remains constant across paraphrases, while natural language generation needs the ability to express content in various ways. Frame semantics is a theory of language understanding that addresses the paraphrase problem by providing slot-and-filler templates to represent frequently occurring, structured experiences. This dissertation introduces SemFrame, a system that induces semantic frames automatically from lexical resources (WordNet and the Longman Dictionary of Contemporary English [LDOCE]). Prior to SemFrame, semantic frames had been developed only by hand. In SemFrame, frames are first identified by enumerating groups of verb senses that evoke a common frame. This is done by combining evidence about pairs of semantically related verbs, based on LDOCE's subject field codes, words used in LDOCE definitions and WordNet glosses, WordNet's array of semantic relationships, etc. Pairs are gathered into larger groupings, deemed to correspond to semantic frames. Nouns associated with the verbs evoking a frame are then analyzed against WordNet's semantic network to identify nodes corresponding to frame slots. SemFrame is evaluated in two ways: (1) Compared against the handcrafted FrameNet, SemFrame achieves its best recall-precision balance with 83.2% recall (based on SemFrame's coverage of FrameNet frames) and 73.8% precision (based on SemFrame verbs' semantic relatedness to other frame-evoking verbs); a WordNet-hierarchy-based lower bound achieves 52.8% recall and 46.6% precision. (2) A frame-semantic-enhanced version of Hearst's TextTiling algorithm, applied to detecting boundaries between consecutive documents, improves upon the non-enhanced TextTiling algorithm at statistically significant levels. (A previous enhancement of the text segmentation algorithm with thesaural relationships had degraded performance.)
Item HORUS: A WLAN-BASED INDOOR LOCATION DETERMINATION SYSTEM (2004-04-08)
Abdel Azim Yousief Abdel Rehim, Moustafa Amin; Agrawala, Ashok K; Computer Science

As ubiquitous computing becomes more popular, the need for context-aware applications increases. The context of an application refers to the information that is part of its operating environment, typically including the location and activity of people and the state of other devices. Algorithms and techniques that make an application aware of the location of a device on a map of the environment are a prerequisite for many of these applications. Many systems over the years have tackled the problem of determining and tracking user position. Examples include GPS, wide-area cellular-based systems, infrared-based systems, magnetic tracking systems, various computer vision systems, physical contact systems, and radio frequency (RF) based systems. Of these, the class of RF-based systems that use an underlying wireless data network, such as an IEEE 802.11 wireless network, to estimate user location has gained attention recently, especially for indoor applications. RF-based techniques provide more ubiquitous coverage than other indoor location determination systems and require no additional hardware for user location determination, thereby enhancing the value of the wireless data network. However, using a wireless network for location determination brings the challenge of dealing with the noisy characteristics of the wireless channel. Current location determination techniques for 802.11 wireless networks suffer from these noisy characteristics, leading to coarse-grained accuracy. A key feature of current techniques is their dependence on building a radio map for the area of interest and using this radio map to infer the user location. Using the radio map to infer the user location is a computationally intensive process and may consume the scarce energy resources of mobile units. The Horus system is concerned with developing accurate methods for determining user location with low computational requirements. Our goal is to build a location determination system that is capable of determining user position with high accuracy, is light enough to be implemented on energy-constrained devices such as handheld computers, and is scalable to tracking a large number of users over large areas. We identify different causes of wireless channel variation and develop techniques to handle them. The results show that the Horus system achieves accuracy significantly better than current WLAN location determination systems. Moreover, the number of operations required to run the algorithm is more than an order of magnitude smaller than in current systems.
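Radio-map techniques of this kind estimate location by asking which calibration point best explains the observed signal strengths. Below is a toy maximum-likelihood sketch in that spirit; the locations, access points, and independent-Gaussian signal model are all invented for illustration, and Horus's actual algorithms and optimizations differ.

```python
import math

# Hypothetical radio map: location -> {access point: (mean RSSI, std dev)},
# built offline by sampling signal strengths at known positions.
RADIO_MAP = {
    "room_101": {"ap1": (-40.0, 4.0), "ap2": (-65.0, 5.0)},
    "room_102": {"ap1": (-60.0, 5.0), "ap2": (-45.0, 4.0)},
}

def gaussian_log_pdf(x, mean, std):
    return -0.5 * ((x - mean) / std) ** 2 - math.log(std * math.sqrt(2 * math.pi))

def locate(observed):
    """Return the radio-map location that maximizes the likelihood of the
    observed RSSI readings, assuming independent Gaussian signal models."""
    def log_likelihood(loc):
        return sum(gaussian_log_pdf(observed[ap], mean, std)
                   for ap, (mean, std) in RADIO_MAP[loc].items()
                   if ap in observed)
    return max(RADIO_MAP, key=log_likelihood)

print(locate({"ap1": -42.0, "ap2": -63.0}))  # -> room_101
```

The probabilistic model is what absorbs the channel noise the abstract emphasizes: a single noisy reading shifts the likelihoods only slightly.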
Item View-Invariance in Visual Human Motion Analysis (2004-04-29)
Parameswaran, Vasudev; Chellappa, Rama; Computer Science

This thesis makes contributions toward solutions to two problems in visual human motion analysis: human action recognition and human body pose estimation. Although a substantial amount of research has addressed these two problems in the past, the important issue of viewpoint invariance in the representation and recognition of poses and actions has received relatively scarce attention, and it forms a key goal of this thesis. Drawing on results from 2D projective invariance theory and 3D mutual invariants, we present three approaches, of varying degrees of generality, for human action representation and recognition. A detailed analysis of the approaches reveals key challenges, which are circumvented by enforcing spatial and temporal coherency constraints. An extensive performance evaluation of the approaches on 2D projections of motion capture data and manually segmented real image sequences demonstrates that, in addition to viewpoint changes, the approaches handle well varying speeds of execution of actions (and hence different frame rates of the video), different subjects, and minor variabilities in the spatiotemporal dynamics of the action. Next, we present a method for recovering the body-centric coordinates of key joints and parts of a canonically scaled human body, given an image of the body and point correspondences for specific body joints in the image. This problem is difficult to solve because of body articulation and perspective effects. To make the problem tractable, previous researchers have resorted to restricting the camera model or requiring an unrealistic number of point correspondences, both of which are more restrictive than necessary. We present a solution for the general case of a perspective uncalibrated camera. Our method requires only that the torso does not twist considerably, an assumption that is usually satisfied for many poses of the body. We evaluate the quantitative performance of the method on synthetic data and the qualitative performance on real images taken with unknown cameras and viewpoints. Both evaluations show the effectiveness of the method at recovering the pose of the human body.
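The projective invariants underlying such view-invariant representations can be illustrated with the simplest one: the cross ratio of four collinear points, which is unchanged by any projective transformation of the line. The sketch below uses arbitrary map coefficients and is purely illustrative; the thesis constructs richer 2D and 3D invariants over body-point configurations.

```python
def cross_ratio(a, b, c, d):
    """Cross ratio of four collinear points given by 1D coordinates:
    ((a-c)/(b-c)) / ((a-d)/(b-d)). Invariant under projective maps."""
    return ((a - c) / (b - c)) / ((a - d) / (b - d))

def project(x, m=(2.0, 1.0, 0.5, 3.0)):
    """A 1D projective transformation x -> (m0*x + m1) / (m2*x + m3)."""
    return (m[0] * x + m[1]) / (m[2] * x + m[3])

pts = (0.0, 1.0, 2.0, 4.0)
before = cross_ratio(*pts)
after = cross_ratio(*(project(x) for x in pts))
assert abs(before - after) < 1e-9   # same value from any viewpoint
```

A quantity like this, computed from image measurements alone, takes the same value regardless of the (projective) camera viewpoint, which is what makes viewpoint-invariant recognition possible.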
Item Data Centric Cache Measurement Using Hardware and Software Instrumentation (2004-04-30)
Buck, Bryan Roger; Hollingsworth, Jeffrey K; Computer Science

The speed at which microprocessors can perform computations is increasing faster than the speed of access to main memory, making efficient use of memory caches ever more important. Because of this, information about the cache behavior of applications is valuable for performance tuning. To be most useful to a programmer, this information should be presented in a way that relates it to data structures at the source code level; we refer to this as data-centric cache information. This dissertation examines the problem of how to collect such information. We describe techniques for accomplishing this using hardware performance monitors and software instrumentation, and we discuss both performance monitoring features present in existing processors and a proposed feature for future designs. The first technique we describe uses sampling of cache miss addresses, relating them to data structures. We present the results of experiments using an implementation of this technique inside a simulator, which show that it can collect the desired information accurately and with low overhead. We then discuss a tool called Cache Scope that implements this on actual hardware, the Intel Itanium 2 processor. Experiments with this tool validate that perturbation and overhead can be kept low in a real-world setting. We present examples of tuning the performance of two applications based on data from this tool; by changing only the layout of data structures, we achieved approximately 24% and 19% reductions in running time. We also describe a technique that uses a proposed hardware feature providing information about cache evictions to sample eviction addresses. We present results from an implementation of this technique inside a simulator, showing that even though this requires storing considerably more data than sampling cache misses, we are still able to collect information accurate enough to be useful while keeping overhead low. We discuss an example of performance tuning in which we were able to reduce the running time of an application by 8% using information gained from this tool.
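The core of data-centric attribution is mapping each sampled miss address back to the enclosing data structure. A minimal sketch follows, with addresses, region table, and names invented for illustration; real tools recover the ranges from symbol tables and allocation instrumentation.

```python
import bisect
from collections import Counter

# Hypothetical table of data structures and their address ranges.
REGIONS = [                      # (start address, end address, name)
    (0x1000, 0x1FFF, "grid"),
    (0x2000, 0x5FFF, "particles"),
    (0x6000, 0x63FF, "lookup_table"),
]
STARTS = [r[0] for r in REGIONS]

def attribute(miss_address):
    """Map a sampled cache-miss address to the enclosing data structure."""
    i = bisect.bisect_right(STARTS, miss_address) - 1
    if i >= 0 and miss_address <= REGIONS[i][1]:
        return REGIONS[i][2]
    return "unknown"

samples = [0x2040, 0x2048, 0x1200, 0x7000, 0x2100]
print(Counter(attribute(a) for a in samples))
# -> particles: 3, grid: 1, unknown: 1
```

Aggregating such samples per structure, rather than per instruction, is what points the programmer directly at layout changes like the ones that yielded the 24% and 19% improvements above.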
Item On the Generalized Tower of Hanoi Problem I: An Introduction to Cluster Spaces (2004-05-04)
Rukhin, Andrey; Gasarch, William; Applied Mathematics and Scientific Computation

In this thesis, we examine the Tower of Hanoi puzzle with $p$ posts ($p \geq 3$) and $n$ disks ($n \in \mathbb{N}$). We examine the puzzle in the context of a cluster space: a hierarchical partitioning of the space of all possible disk configurations. This thesis includes two theorems that address the topic of minimal paths connecting disk configurations.
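For orientation on the multi-post puzzle, the classical Frame-Stewart recursion below gives the standard upper bound on the number of moves; it is a well-known strategy included here only for context, not the thesis's cluster-space analysis. The idea: park $k$ disks on a spare post, solve the remaining $n-k$ disks with one fewer post available, then move the parked disks on top.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def frame_stewart(n, p):
    """Moves used by the Frame-Stewart strategy for n disks and p posts.
    Known optimal for p = 3 and (Bousch, 2014) p = 4; optimality for
    general p remains open."""
    if n <= 1:
        return n
    if p == 3:
        return 2 ** n - 1
    return min(2 * frame_stewart(k, p) + frame_stewart(n - k, p - 1)
               for k in range(1, n))

print(frame_stewart(8, 3))   # 255 = 2^8 - 1
print(frame_stewart(8, 4))   # 33: a fourth post helps enormously
```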
Item Temporal Treemaps for Visualizing Time Series Data (2004-05-12)
Chintalapani, Gouthami; Shneiderman, Ben; Plaisant, Catherine; Systems Engineering

Treemaps are an interactive graphical technique for visualizing large hierarchical information spaces using nested rectangles in a space-filling manner. The size and color of the rectangles show data attributes and enable users to spot trends, patterns, or exceptions. Current implementations of treemaps help explore time-invariant data. However, many real-world applications require monitoring hierarchical, time-variant data. This thesis extends treemaps to the interactive exploration of time series data by mapping temporal changes to the color attribute of the treemap. Specific contributions of this thesis include:
- Temporal treemaps for exploring time series data by visualizing absolute or relative changes, animating them over time, filtering data items, and discovering trends using time series graphs.
- The design and implementation of extensible software modules based on systems engineering methodologies and an object-oriented approach.
- Validation through five case studies (health statistics, web logs, production data, birth statistics, and help-desk tickets), with future improvements identified from user feedback.

Item A Lattice Kinetic Scheme with Grid Refinement for 3D Resistive Magnetohydrodynamics (2004-05-26)
Osborn, Bryan Russell; Dorland, William D; Applied Mathematics and Scientific Computation

We develop, analyze, and numerically test a 3D lattice kinetic scheme for the resistive magnetohydrodynamic (MHD) equations. This scheme is based on the square D3Q19 lattice for the fluid and the square D3Q7 lattice for the magnetic field. The scheme is shown to be consistent with the MHD equations in the low-Mach, high-beta limit. We numerically test the scheme in a pseudo-3D implementation by examining its reproduction of linear MHD eigenmodes as well as its performance on the non-linear Orszag-Tang problem. Results show that the waves are correctly reproduced and that the code has second-order convergence in time step and grid spacing. A multi-block refinement algorithm is then tested, and its convergence properties are examined for the non-linear Orszag-Tang problem. We conclude that this multi-block refinement algorithm, previously applied only to hydrodynamic lattice kinetic schemes, can be used in conjunction with MHD lattice kinetic schemes.

Item Performance Analysis of a Multi-Class, Preemptive Priority Call Center with Time-Varying Arrivals (2004-06-08)
Ridley, Ahmad; Fu, Michael C.; Massey, William A.; Applied Mathematics and Scientific Computation

We model a call center as an $M_{t}/M/n$ preemptive-resume priority queue with time-varying arrival rates and two priority classes of customers. The low-priority customers have a dynamic priority: they become high priority if their waiting time exceeds a given service-level time. The performance of the call center is estimated by the mean number in the system and the mean virtual waiting time for both classes of customers. We discuss some analytical methods of measuring the performance of call center models, such as Laplace transforms. We also propose a more robust fluid-approximation method for modeling a call center. The accuracy of the performance measures from the fluid-approximation method depends on an asymptotic scheme developed by Halfin and Whitt, in which the offered load and the number of servers are scaled by the same factor, maintaining a constant system utilization. The fluid approximations provide estimates for the mean number in system and the mean virtual waiting time, obtained as solutions of a system of nonlinear differential equations. We analyze the accuracy of the fluid approximations through a comparison with a discrete-event simulation of a call center. We show that for a large enough scale factor, the estimates of the performance measures derived from the fluid-approximation method are close to those from the discrete-event simulation. Finally, we demonstrate that the approximations remain close to the simulation estimates as the system state varies between under-loaded and over-loaded regimes.
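The fluid-approximation idea reduces the stochastic queue to a deterministic ODE driven by the time-varying arrival rate. The toy single-class sketch below integrates $\dot{q} = \lambda(t) - \mu \min(q, n)$ by forward Euler; all parameters are invented, and the thesis's two-class preemptive-resume model leads to a larger system of such equations.

```python
import math

def fluid_trajectory(lam, mu, n, q0, t_end, dt=0.01):
    """Forward-Euler integration of the single-class fluid limit
    dq/dt = lambda(t) - mu * min(q, n): arrivals minus service by at
    most n busy servers. A toy sketch, not the thesis's two-class model."""
    q, t, out = q0, 0.0, []
    while t < t_end:
        q = max(q + dt * (lam(t) - mu * min(q, n)), 0.0)
        t += dt
        out.append((t, q))
    return out

# Sinusoidal arrival rate whose peaks overload the n = 10 servers.
lam = lambda t: 12.0 + 6.0 * math.sin(2 * math.pi * t / 24.0)
for t, q in fluid_trajectory(lam, mu=1.0, n=10, q0=0.0, t_end=48.0)[::600]:
    print(f"t = {t:5.1f} h, fluid queue = {q:6.2f}")
```

As the Halfin-Whitt scaling in the abstract suggests, trajectories like this track the simulated mean queue length increasingly well as the offered load and server count grow together.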