Technical Reports from UMIACS

Permanent URI for this collection: http://hdl.handle.net/1903/7


Search Results

Now showing 1 - 10 of 902
  • An Analysis of Derived Belief Strategy’s Performance in the 2005 Iterated Prisoner’s Dilemma Competition
    (2006-02-19) Au, Tsz-Chiu
    This report analyzes the performance of the Derived Belief Strategy (DBS) in the 2005 Iterated Prisoner’s Dilemma Competition [1]. Our technique is to remove participants from the computation of the average scores, as if they had never participated in the competition.
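    A rough illustration of the recomputation described above (an editorial sketch, not the report's code): given a hypothetical table of pairwise scores, drop the excluded participant from both the rows and the columns before averaging.

      # Hypothetical pairwise score table: scores[a][b] is the total score
      # player a earned in its matches against player b.
      scores = {
          "DBS":  {"TFT": 580, "RAND": 720},
          "TFT":  {"DBS": 560, "RAND": 650},
          "RAND": {"DBS": 310, "TFT": 400},
      }

      def average_scores(scores, excluded=()):
          """Average score per player, ignoring excluded participants entirely."""
          averages = {}
          for player, row in scores.items():
              if player in excluded:
                  continue  # drop the excluded participant's own results
              kept = [s for opponent, s in row.items() if opponent not in excluded]
              averages[player] = sum(kept) / len(kept)
          return averages

      print(average_scores(scores))                      # full competition
      print(average_scores(scores, excluded=("RAND",)))  # as if RAND had never entered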
  • Learning and Verification of Safety Parameters for Airspace Deconfliction
    (2009-11-30) Rebguns, Antons; Green, Derek; Levine, Geoffrey; Kuter, Ugur; Spears, Diana
    We present a Bayesian approach to learning flexible safety constraints and subsequently verifying whether plans satisfy these constraints. Our approach, called the Safety Constraint Learner/Checker (SCLC), is embedded within the Generalized Integrated Learning Architecture (GILA), which is an integrated, heterogeneous, multi-agent ensemble architecture designed for learning complex problem solving techniques from demonstration by human experts. The SCLC infers safety constraints from a single expert demonstration trace, and applies these constraints to the solutions proposed by the agents in the ensemble. Blame for constraint violations is then transmitted to the individual learning/planning/reasoning agents, thereby facilitating new problem-solving episodes. We discuss the advantages of the SCLC and demonstrate empirical results on an Airspace Planning and Deconfliction Task, which was a benchmark application in the DARPA Integrated Learning Program.
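    The abstract does not spell out the SCLC algorithm; the sketch below is a deliberately simplified, non-Bayesian stand-in for the general pattern of inferring a conservative bound from one demonstration trace and then checking candidate plans against it. The separation values, the 0.9 margin, and the function names are illustrative assumptions only.

      # Illustrative stand-in for the SCLC idea: learn a safety constraint from one
      # expert demonstration, then check candidate plans and report violations.
      def learn_min_separation(demo_separations, margin=0.9):
          """Hypothetical constraint form: never get closer than `margin` times the
          smallest spacing the expert allowed in the demonstration."""
          return margin * min(demo_separations)

      def check_plan(plan_separations, min_sep):
          """Return indices of plan steps that violate the learned constraint."""
          return [i for i, d in enumerate(plan_separations) if d < min_sep]

      # Made-up aircraft separations (e.g., nautical miles) per plan step.
      expert_demo = [8.4, 7.9, 9.2, 7.5]
      candidate_plan = [8.0, 6.1, 9.5]

      min_sep = learn_min_separation(expert_demo)
      print(f"learned minimum separation: {min_sep:.2f}")
      print(f"violating steps: {check_plan(candidate_plan, min_sep)}")  # blame goes to the proposing agent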
  • Extending Phrase-Based Decoding with a Dependency-Based Reordering Model
    (2009-12-23) Hunter, Tim; Resnik, Philip
    Phrase-based decoding is conceptually simple and straightforward to implement, at the cost of drastically oversimplified reordering models. Syntactically aware models make it possible to capture linguistically relevant relationships in order to improve word order, but they can be more complex to implement and optimise. In this paper, we explore a new middle ground between phrase-based and syntactically informed statistical MT, in the form of a model that supplements conventional, non-hierarchical phrase-based techniques with linguistically informed reordering based on syntactic dependency trees. The key idea is to exploit linguistically-informed hierarchical structures only for those dependencies that cannot be captured within a single flat phrase. For very local dependencies we leverage the success of conventional phrase-based approaches, which provide a sequence of target-language words appropriately ordered and ready-made with the appropriate agreement morphology. Working with dependency trees rather than constituency trees allows us to take advantage of the flexibility of phrase-based systems to treat non-constituent fragments as phrases. We do impose a requirement --- that the fragment be a novel sort of "dependency constituent" --- on what can be translated as a phrase, but this is much weaker than the requirement that phrases be traditional linguistic constituents, which has often proven too restrictive in MT systems.
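    The paper's precise definition of its "dependency constituent" is not given in the abstract. One plausible reading, assumed purely for illustration here, is a contiguous span in which at most one token is attached to a head outside the span; a minimal check under that assumption:

      # Illustrative check under an ASSUMED definition: a contiguous span counts as a
      # "dependency constituent" if at most one of its tokens has a head outside it.
      # heads[i] is the index of token i's head (-1 marks the sentence root).
      def is_dependency_constituent(heads, start, end):
          span = range(start, end + 1)
          external = [i for i in span if heads[i] not in span]
          return len(external) <= 1

      # Hypothetical parse of "the old man ate fish":
      # the->man, old->man, man->ate, ate->root, fish->ate
      heads = [2, 2, 3, -1, 3]
      print(is_dependency_constituent(heads, 0, 2))  # "the old man": True (one external head)
      print(is_dependency_constituent(heads, 0, 1))  # "the old":     False (two external heads)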
  • Scaling Single-Program Performance on Large-Scale Chip Multiprocessors
    (2009-11-25) Wu, Meng-Ju; Yeung, Donald
    Due to power constraints, computer architects will exploit thread-level parallelism (TLP) instead of instruction-level parallelism (ILP) for future performance gains. Today, 4-8 state-of-the-art cores or 10s of smaller cores can fit on a single die. For the foreseeable future, the number of cores will likely double with each successive processor generation. Hence, CMPs with 100s of cores, so-called large-scale chip multiprocessors (LCMPs), will become a reality after only 2 or 3 generations. Unfortunately, simply scaling the number of on-chip cores will not guarantee improved performance; effectively utilizing all of the cores is also necessary. Perhaps the greatest threat to processor utilization will be the overhead incurred waiting on the memory system, especially as on-chip concurrency scales to 100s of threads. In particular, remote cache bank access and off-chip bandwidth contention are likely to be the most significant obstacles to scaling memory performance. This paper conducts an in-depth study of CMP scalability for parallel programs. We assume a tiled CMP in which each tile contains a simple core along with a private L1 cache and a local slice of a shared L2 cache. Our study considers scaling from 1 to 256 cores and from 4 to 128 MB of total L2 cache, and addresses several issues related to the impact of scaling on off-chip bandwidth and on-chip communication. In particular, we find that off-chip bandwidth increases linearly with core count, but that the rate of increase drops dramatically once enough L2 cache is provided to capture inter-thread sharing. Our results also show that, for the range of 1 to 256 cores, there should be ample on-chip bandwidth to support the communication requirements of our benchmarks. Finally, we find that applications become off-chip limited when their L2 cache miss rates exceed some minimum threshold. Moreover, we expect off-chip overheads to dominate on-chip overheads for memory-intensive programs and LCMPs with aggressive cores.
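    A back-of-envelope model, with made-up numbers and not taken from the paper, makes the bandwidth observation concrete: aggregate off-chip demand grows with core count times per-core miss traffic, so either larger L2 caches must lower the miss rate or off-chip bandwidth becomes the limiter.

      # Rough off-chip bandwidth model; every parameter below is an assumption.
      LINE_BYTES = 64         # cache line size
      ACCESSES_PER_SEC = 5e8  # L2 accesses per core per second (assumed)
      PIN_BW_GBS = 100        # assumed available off-chip bandwidth, GB/s

      def offchip_demand_gbs(cores, l2_miss_rate):
          """Aggregate off-chip demand in GB/s: cores x misses/s x line size."""
          return cores * l2_miss_rate * ACCESSES_PER_SEC * LINE_BYTES / 1e9

      for cores, miss_rate in [(64, 0.02), (256, 0.02), (256, 0.005)]:
          demand = offchip_demand_gbs(cores, miss_rate)
          status = "off-chip limited" if demand > PIN_BW_GBS else "fits"
          print(f"{cores:4d} cores, miss rate {miss_rate:.3f}: {demand:7.1f} GB/s ({status})")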
  • The DSPCAD Integrative Command Line Environment: Introduction to DICE Version 1
    (2009-11-23) Bhattacharyya, Shuvra S.; Kedilaya, Soujanya; Plishker, William; Sane, Nimish; Shen, Chung-Ching; Zaki, George
    DICE (the DSPCAD Integrative Command Line Environment) is a package of utilities that facilitates efficient management of software projects. Key areas of emphasis in DICE are cross-platform operation, support for projects that integrate heterogeneous programming languages, and support for applying and integrating different kinds of design and testing methodologies. The package is being developed at the University of Maryland to facilitate the research and teaching of methods for implementation, testing, evolution, and revision of engineering software, and as a foundation for developing experimental research software for techniques and tools in the area of computer-aided design (CAD) of digital signal processing (DSP) systems. It is intended for cross-platform operation and is currently developed and used actively on Windows (with Cygwin), Solaris, and Linux. This report provides an introduction to DICE and background on some of its key features, together with a brief introduction to dicelang, a plug-in package for DICE that provides additional utilities, libraries, and tools for managing software projects in specific programming languages.
  • Computation of the head-related transfer function via the boundary element method and representation via the spherical harmonic spectrum
    (2009-05-15) Gumerov, Nail A.; O'Donovan, Adam; Duraiswami, Ramani; Zotkin, Dmitry N.
    The head-related transfer function (HRTF) is computed using the fast multipole accelerated boundary element method. For efficiency, the HRTF is computed using the reciprocity principle, by placing a source at the ear and computing its field. Analysis is presented to modify the boundary value problem accordingly. To compute the HRTF corresponding to different ranges via a single computation, a compact and accurate representation of the HRTF, termed the spherical spectrum, is developed. Computations are reduced to a two-stage process: computation of the spherical spectrum and a subsequent evaluation of the HRTF. This representation allows easy interpolation and range extrapolation of HRTFs. HRTF computations are performed for the range of audible frequencies up to 20 kHz for several models, including a sphere, human head models (“Fritz” and “Kemar”), and a head-and-torso model (the Kemar manikin). Comparisons between the different cases and analysis of limiting cases are provided. Comparisons with the computational data of other authors and available experimental data are conducted and show satisfactory agreement for the frequencies for which reliable experimental data are available. Our results show that, given a good mesh, it is feasible to compute the HRTF over the full audible range on a regular personal computer.
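    The report's spherical-spectrum machinery is not reproduced here, but evaluating a function on the sphere from spherical harmonic coefficients has the generic form sketched below; the truncation order and the random coefficients are placeholders, and scipy's sph_harm argument convention (order m, degree n, azimuth theta, colatitude phi) is assumed.

      import numpy as np
      from scipy.special import sph_harm

      def evaluate_from_spectrum(coeffs, theta, phi):
          """Sum C[n][m] * Y_n^m(theta, phi); coeffs[n] holds the 2n+1 coefficients
          for m = -n..n, so the list length is the truncation order plus one."""
          value = 0j
          for n, row in enumerate(coeffs):
              for m, c in zip(range(-n, n + 1), row):
                  value += c * sph_harm(m, n, theta, phi)
          return value

      # Placeholder spectrum truncated at order 3 (real HRTF computations go far higher).
      rng = np.random.default_rng(0)
      coeffs = [rng.standard_normal(2 * n + 1) for n in range(4)]
      print(evaluate_from_spectrum(coeffs, theta=0.3, phi=1.2))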
  • Fast Iterative Solver for Convection-Diffusion Systems with Spectral Elements
    (2009-03-04) Lott, P. Aaron; Elman, Howard
    We introduce a solver and preconditioning technique based on Domain Decomposition and the Fast Diagonalization Method that can be applied to tensor-product-based discretizations of the steady convection-diffusion equation. The method is based on iterative substructuring, in which fast diagonalization is used to efficiently eliminate the interior degrees of freedom and to perform the subsidiary subdomain solves. We demonstrate the effectiveness of this method in numerical simulations using a spectral element discretization.
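    The convection-diffusion setting of the report is more involved, but the core of the Fast Diagonalization Method is easy to state for a symmetric tensor-product (diffusion-only) model problem: diagonalize the 1D operator once, and each 2D solve then reduces to a few dense matrix products. A minimal NumPy sketch under that simplification:

      import numpy as np

      def fdm_solve(A, F):
          """Solve A U + U A = F, i.e. the tensor-product system
          (A kron I + I kron A) vec(U) = vec(F), for symmetric A."""
          w, V = np.linalg.eigh(A)               # diagonalize the 1D operator once
          G = V.T @ F @ V                        # transform the right-hand side
          U_hat = G / (w[:, None] + w[None, :])  # invert the now-diagonal operator
          return V @ U_hat @ V.T                 # transform back

      # Illustrative 1D second-difference (diffusion-like) operator on n points.
      n = 50
      A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
      F = np.random.default_rng(0).standard_normal((n, n))
      U = fdm_solve(A, F)
      print(np.allclose(A @ U + U @ A, F))       # True: solved without forming A kron I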
  • Boundary Conditions in Approximate Commutator Preconditioners for the Navier-Stokes Equations
    (2009-02) Elman, Howard C.; Tuminaro, Ray S.
    Boundary conditions are analyzed for a class of preconditioners used for the incompressible Navier-Stokes equations. We consider pressure convection-diffusion preconditioners [8,12] as well as least-squares commutator methods [2,3], both of which rely on commutators of certain differential operators. The effectiveness of these methods has been demonstrated in various studies, but both methods also have some deficiencies. For example, the pressure convection-diffusion preconditioner requires the construction of a Laplace operator and a convection-diffusion operator, together with some choices of boundary conditions. These boundary conditions are not well understood, and a poor choice can critically affect performance. This paper looks closely at properties of commutators near domain boundaries. We show that it is sometimes possible to choose boundary conditions to force the commutators of interest to be zero at boundaries, and this leads to a new strategy for choosing boundary conditions for the purpose of specifying preconditioning operators. With the new preconditioners, Krylov subspace methods display noticeably improved performance for solving the Navier-Stokes equations; in particular, mesh-independent convergence rates are observed for some problems for which previous versions of the methods did not exhibit this behavior.
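    For context, the standard textbook forms of these two Schur-complement approximations (general formulations from the literature, not details specific to this report) are, with discrete divergence B, velocity convection-diffusion matrix F, velocity mass matrix M_u (in practice often approximated by its diagonal), and pressure mass, Laplacian, and convection-diffusion matrices M_p, A_p, F_p:

      S = B F^{-1} B^T ≈ A_p F_p^{-1} M_p,   i.e.   S^{-1} ≈ M_p^{-1} F_p A_p^{-1}        (pressure convection-diffusion)
      S^{-1} ≈ (B M_u^{-1} B^T)^{-1} (B M_u^{-1} F M_u^{-1} B^T) (B M_u^{-1} B^T)^{-1}    (least-squares commutator)

    The boundary-condition choices discussed in the abstract enter through the construction of the pressure-space operators A_p and F_p.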
  • An Approach to Improving Parametric Estimation Models in the Case of Violation of Assumptions Based upon Risk Analysis
    (2008-12) Sarcia, Salvatore Alessandro; Basili, Victor Robert; Cantone, Giovanni
    In this work, we show the mathematical reasons why parametric models fall short of providing correct estimates and define an approach that overcomes the causes of these shortfalls. The approach aims at improving parametric estimation models when any regression model assumption is violated for the data being analyzed. Violations can be that the errors are x-correlated, the model is not linear, the sample is heteroscedastic, or the error probability distribution is not Gaussian. If the data violate the regression assumptions and we do not deal with the consequences of these violations, we cannot improve the model and estimates will remain incorrect. The novelty of this work is that we define and use a feed-forward multi-layer neural network for discrimination problems to calculate prediction intervals (i.e., evaluate uncertainty), make estimates, and detect improvement needs. The primary difference from traditional methodologies is that the proposed approach can deal with scope error, model error, and assumption error at the same time. The approach can be applied for prediction, inference, and model improvement in any situation and context without making specific assumptions. An important benefit of the approach is that it can be completely automated as a stand-alone estimation methodology or used to support experts and organizations together with other estimation techniques (e.g., human judgment, parametric models). Unlike other methodologies, the proposed approach focuses on model improvement by integrating the estimation activity into a wider process that we call the Estimation Improvement Process, an instantiation of the Quality Improvement Paradigm. This approach aids mature organizations in learning from their experience and improving their processes over time with respect to managing their estimation activities. To provide an exposition of the approach, we use an old NASA COCOMO data set to (1) build an evolvable neural network model and (2) show how a parametric model, e.g., a regression model, can be improved and evolved with the new project data.
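    The report builds its uncertainty estimates with a feed-forward neural network; as a much simpler stand-in for the idea of turning out-of-sample errors into a prediction interval, the sketch below computes an empirical interval from past residuals of any point estimator. The residual values and the 90% level are placeholders.

      import numpy as np

      def empirical_prediction_interval(residuals, estimate, level=0.90):
          """Turn past out-of-sample residuals (actual - predicted) into an interval
          around a new point estimate; purely empirical, no Gaussianity assumed."""
          lo, hi = np.quantile(residuals, [(1 - level) / 2, (1 + level) / 2])
          return estimate + lo, estimate + hi

      # Placeholder residuals (e.g., effort in person-months) from earlier projects.
      past_residuals = np.array([-3.1, 4.2, -1.5, 0.8, 6.0, -2.2, 1.1, -0.4, 2.9, -5.0])
      print(empirical_prediction_interval(past_residuals, estimate=42.0))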
  • How friendship links and group memberships affect the privacy of individuals in social networks
    (2008-07-01) Zheleva, Elena; Getoor, Lise
    In order to address privacy concerns, many social media websites allow users to hide their personal profiles from the public. In this work, we show how an adversary can exploit a social network with a mixture of public and private user profiles to predict the private attributes of users. We map this problem to a relational classification problem and propose a simple yet powerful model that uses group features and group memberships of users to perform multi-value classification. We compare its efficacy against several other classification approaches. Our results show that even when users have the option of making profile attributes private, their privacy in social networks may be compromised if links and group affiliations are known. On a dataset from a well-known social-media website, we could easily recover the sensitive attributes of half of the private-profile users with high accuracy, even when as many as half of the profiles are private. To the best of our knowledge, this is the first work that uses link-based and group-based classification to study privacy implications in social networks. We conclude with a discussion of our findings and the broader applicability of our proposed model.
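    The core claim, that group memberships of public profiles leak information about private ones, can be illustrated with a deliberately simple majority-vote classifier; the toy data below are made up, and the paper's model is richer than this.

      from collections import Counter

      # Toy data: group memberships, plus the attribute value for users whose
      # profiles are public (None marks a private profile).
      groups = {
          "alice": {"hiking", "jazz"},
          "bob":   {"hiking"},
          "carol": {"jazz"},
          "dave":  {"hiking", "jazz"},
      }
      public_attr = {"alice": "city_A", "bob": "city_A", "carol": "city_B", "dave": None}

      def predict_private_attribute(user):
          """Majority vote over public users who share at least one group."""
          votes = Counter(
              public_attr[other]
              for other, gs in groups.items()
              if other != user and public_attr[other] is not None and gs & groups[user]
          )
          return votes.most_common(1)[0][0] if votes else None

      print(predict_private_attribute("dave"))  # "city_A", inferred from group co-members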