Computer Science Department Technical Reports
Browsing Computer Science Department Technical Reports by Issue Date
Now showing 1 - 20 of 1187
- Invariant Subspaces and Capital Punishment (A Participatory Paper) (1995-02-06). Stewart, G. W. The notion of invariant subspaces is useful in a number of theoretical and practical applications. In this paper we give an elementary treatment of invariant subspaces that stresses their connection with simple eigenvalues and their eigenvectors.
- Hypothesis Testing with Errors in the Variables (1995-02-06). David, Nancy; Stewart, G. W. In this paper we give reason to hope that errors in regression variables are not as harmful as one might expect. Specifically, we show that although the errors can change the values of the quantities one computes in a regression analysis, under certain conditions they leave the distributions of those quantities approximately unchanged.
- A Two-Stage Iteration for Solving Nearly Uncoupled Markov Chains (1995-02-06). Stewart, G. W.; Stewart, W. J.; McAllister, D. F. This paper is concerned with an iteration for determining the steady-state probability vector of a nearly uncoupled Markov chain. The states of these chains can be partitioned into aggregates with low probabilities of transitions between aggregates. The iteration consists of alternating block Gauss-Seidel iterations with Rayleigh-Ritz refinements. Under natural regularity conditions, the composite iteration reduces the error by a factor proportional to the size of the coupling between aggregates, so that the more loosely the chain is coupled, the faster the convergence.
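As a minimal illustration of the problem setup in the abstract above (not the paper's two-stage iteration), the sketch below builds a hypothetical 4-state chain with two aggregates and a small inter-aggregate coupling eps, then solves for the stationary vector directly:

```python
import numpy as np

# Hypothetical 4-state chain with two aggregates {0,1} and {2,3};
# transitions *between* aggregates occur with small probability eps,
# which is what "nearly uncoupled" means.
eps = 1e-3
P = np.array([
    [0.6 - eps, 0.4,       eps,       0.0      ],
    [0.3,       0.7 - eps, 0.0,       eps      ],
    [eps,       0.0,       0.5 - eps, 0.5      ],
    [0.0,       eps,       0.2,       0.8 - eps],
])  # row-stochastic: every row sums to 1

# Steady-state vector pi satisfies pi P = pi with sum(pi) = 1.
# Solve the (consistent) overdetermined linear system directly.
A = np.vstack([(P - np.eye(4)).T, np.ones((1, 4))])
b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(pi)  # stationary: pi @ P is (numerically) pi again
```

A direct solve like this becomes expensive for large chains; the paper's point is that exploiting the aggregate structure gives an iteration whose convergence improves as the coupling eps shrinks.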
- An Iterative Method for Solving Linear Inequalities (1995-02-06). Stewart, G. W. This paper describes and analyzes a method for finding nontrivial solutions of the inequality $Ax \geq 0$, where $A$ is an $m \times n$ matrix of rank $n$. The method is based on the observation that a certain function $f$ has a unique minimum if and only if the inequality fails to have a nontrivial solution. Moreover, if there is a solution, an attempt to minimize $f$ will produce a sequence that will diverge in a direction that converges to a solution of the inequality. The technique can also be used to solve inhomogeneous inequalities and hence linear programming problems, although no claims are made about competitiveness with existing methods.
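For intuition about the problem $Ax \geq 0$, here is a classical perceptron-style relaxation sketch (a hypothetical illustrative stand-in, not the minimization-based method of the paper): whenever a row $a_i$ satisfies $a_i \cdot x < 0$, push $x$ toward that constraint by adding $a_i$.

```python
import numpy as np

# Toy instance: find a nontrivial x with A x >= 0 (A is m x n, rank n).
A = np.array([[ 1.0,  2.0],
              [ 3.0, -1.0],
              [ 0.5,  0.5]])

x = np.array([1.0, -1.0])         # deliberately infeasible starting point
for _ in range(1000):
    violated = A @ x < 0          # rows with a_i . x < 0
    if not violated.any():
        break                     # A x >= 0 achieved
    x = x + A[violated].sum(axis=0)

print(x, A @ x)                   # every entry of A @ x is nonnegative
```

This relaxation converges only when a strictly feasible direction exists; part of the paper's appeal is that its minimization also *detects* infeasibility, which the sketch above does not.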
- A Study of File Manipulation by Novices Using Commands vs. Direct Manipulation (1995-04-13). Margono, Sepeedeh; Shneiderman, Ben. There are three basic interactive styles of control in human interfaces with computers: command, menu, and direct manipulation. In the past few years, these three styles have become the subject of many studies. However, few comparisons have been made between interfaces that use direct manipulation and command styles. This experiment compares file manipulation operations on the Apple Macintosh, which has a direct manipulation interface, with the IBM PC with MS-DOS, which has a command interface. After a brief training period, novices accomplished file manipulation tasks more rapidly, with fewer errors, and with greater satisfaction on the Apple Macintosh. Problems arising with both interfaces are discussed and suggestions for improvements are made. (Also cross-referenced as CAR-TR-264)
- Life cycle of user interface techniques: The DJJ information system design process (1998-10-15). Rose, Anne; Ellis, Jason; Plaisant, Catherine; Greene, Stephan. To take advantage of today's technology, many organizations are migrating from their legacy systems. With help from the Human-Computer Interaction Laboratory (HCIL) and Cognetics Corporation, the Maryland Department of Juvenile Justice (DJJ) is currently redesigning its information system to take advantage of graphical user interfaces. As a research lab, HCIL identifies interesting research problems and then prototypes solutions. As a project matures, the exploratory prototypes are adapted to suit the end-product requirements. This case study describes the life cycle of three DJJ prototypes: (1) LifeLines, which uses time lines to display an overview of a youth in one screen, (2) the DJJ Navigator, which helps manage individual workloads by displaying different user views, and (3) the ProgramFinder, a tool for selecting the best program for a youth. (Also cross-referenced as CAR-TR-826)
- Determining Schedules based on Performance Estimation (1998-10-15). Kelly, Wayne; Pugh, William. In previous work, we presented a framework for unifying iteration-reordering transformations such as loop interchange, loop distribution, loop skewing, and statement reordering. The framework provides a uniform way to represent and reason about transformations. However, it does not provide a way to decide which transformation(s) should be applied to a given program. This paper describes a way to make such decisions within the context of the framework. The framework is based on the idea that a transformation can be represented as a schedule that maps the original iteration space to a new iteration space. We show how to estimate the performance of a program by considering only the schedule from which it was produced, and how to produce an upper bound on performance given only a partially specified schedule. Our ability to estimate performance directly from schedules, even partially specified ones, allows us to efficiently find schedules that will produce good code. (Also cross-referenced as UMIACS-TR-93-67)
- Active Logic and Heim's Rules for Updating Discourse Context (1998-10-15). Gurney, John; Perlis, Don; Purang, Khemdut. Discourse unfolds in time, giving rise to a cascade of belief changes in the listener. Yet this temporal evolution of discourse and belief is typically ignored in theoretical treatments of discourse. It has been claimed (see Soames) that Heim's theory of discourse context accounts for non-implicative discourse updating. We present a new non-implicative discourse that cannot be accounted for with Heim's use of global or local accommodation, and which appears to require attention to the evolution of discourse. We use this example to motivate remaking Heim's update function, aiming toward a unified approach to discourse, one in which Heim's rules for discourse updating can account for more of the problem cases for the theory of discourse context. These rules and the revised update function can then serve as principles that constrain the building of representations for discourse context (such as the Discourse Representation Structures of Discourse Representation Theory; Kamp and Reyle). We propose active logic as a convenient tool for executing the required inferences (as called for by our revised version of Heim's update function) as the discourse evolves through time. (Also cross-referenced as UMIACS-TR-96-43)
- The Loading Time Scheduling Problem (1998-10-15). Bhatia, Randeep; Khuller, Samir; Naor, Joseph (Seffi). In this paper we study precedence-constrained scheduling problems in which the tasks can only be executed on a specified subset of the machines. Each machine has a loading time that is incurred only for the first task scheduled on the machine in a particular run. This basic scheduling problem arises in the context of machining on numerically controlled machines, query optimization in databases, and other artificial intelligence applications. We give the first non-trivial approximation algorithm for this problem. We also prove non-trivial lower bounds on the best possible approximation ratios, improving on those implied by the non-approximability results for the shortest common supersequence problem. We use the same algorithmic technique to obtain approximation algorithms for a problem arising in the context of code generation for parallel machines, and for the weighted shortest common supersequence problem.
- Codex, Memex, Genex: The pursuit of transformational technologies (1998-10-15). Shneiderman, Ben. Handwritten codices and printed books transformed society by allowing users to preserve and transmit information. Today, leather-bound volumes and illuminated manuscripts are giving way to animated image maps and hot links. Vannevar Bush's memex has inspired the World Wide Web, which provides users with vast information resources and convenient communications. In looking to the future, we might again transform society by building genexes, generators of excellence. Such inspirational environments would empower personal and collaborative creativity by enabling users to (1) collect information from an existing domain of knowledge, (2) create innovations using advanced tools, (3) consult with peers or mentors in the field, and then (4) disseminate the results widely. This paper describes how a framework for an integrated set of software tools might support this four-phase model of creativity in science, medicine, the arts, and beyond. Current initiatives are positive and encouraging, but they do not work in an integrated fashion, often miss vital components, and are frequently poorly designed. A well-conceived and clearly stated framework could guide design efforts, coordinate planning, and speed development. (Also cross-referenced as UMIACS-TR-97-89)
- Investigating Touchscreen Typing: The Effect of Keyboard Size on Typing Speed (1998-10-15). Sears, Andrew; Revis, Doreen; Swatski, Janet; Crittenden, Rob; Shneiderman, Ben. Two studies investigated the effect keyboard size has on typing speed and error rates for touchscreen keyboards using the lift-off strategy. A cursor appeared when users touched the screen, and a key was selected when they lifted their finger from the screen. Four keyboard sizes were investigated, ranging from 24.6 cm to 6.8 cm wide. Results indicate that novices can type approximately 10 words per minute (WPM) on the smallest keyboard and 20 WPM on the largest. Experienced users improved to 21 WPM on the smallest keyboard and 32 WPM on the largest. These results indicate that, although slower, small touchscreen keyboards can be used for limited data entry when the presence of a regular keyboard is not practical. Applications include portable pocket-sized or palmtop computers, messaging systems, and personal information resources. Results also suggest the increased importance of experience on these smaller keyboards. Research directions are suggested. (Also cross-referenced as CAR-TR-553)
- Using the Parka Parallel Knowledge Representation System (Version 3.2) (1998-10-15). Kettler, Brian; Andersen, William; Hendler, James; Luke, Sean. Parka is a symbolic, semantic-network knowledge representation system that takes advantage of the massive parallelism of supercomputers such as the Connection Machine. The Parka language has many of the features of traditional semantic-net/frame-based knowledge representation languages but also supports several kinds of rapid parallel inference mechanisms that scale to large knowledge bases of hundreds of thousands of frames or more. Parka is intended for general-purpose use and has been used thus far to support A.I. systems for case-based reasoning and data mining. This document is a user manual for the current version of Parka, version 3.2. It describes the Parka language and presents some examples of knowledge representation using Parka. Details about the parallel algorithms, implementation, and empirical results are presented elsewhere. (Also cross-referenced as UMIACS-TR-95-68)
- An Experiment to Assess the Cost-Benefits of Code Inspections in Large Scale Software Development (1998-10-15). Porter, Adam A.; Toman, C. A.; Siy, Harvey; Votta, Lawrence G. We are conducting a long-term experiment (in progress) to compare the costs and benefits of several different software inspection methods. These methods are being applied by professional developers to a commercial software product they are currently writing. Because the laboratory for this experiment is a live development effort, we took special care to minimize cost and risk to the project while maximizing our ability to gather useful data. This article has several goals: (1) to describe the experiment's design and show how we used simulation techniques to optimize it, (2) to present our preliminary results and discuss their implications for both software practitioners and researchers, and (3) to discuss how we expect to modify the experiment in order to reduce potential risks to the project. For each inspection we randomly assign three independent variables: (1) the number of reviewers on each inspection team (1, 2, or 4), (2) the number of teams inspecting the code unit (1 or 2), and (3) the requirement that defects be repaired between the first and second team's inspections. The reviewers for each inspection are randomly selected without replacement from a pool of 11 experienced software developers. The dependent variables for each inspection include inspection interval (elapsed time), total effort, and defect detection rate. To date we have completed 34 of the planned 64 inspections. Our preliminary results challenge certain long-held beliefs about the most cost-effective ways to conduct inspections and raise some questions about the feasibility of recently proposed methods. (Also cross-referenced as UMIACS-TR-95-14)
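The randomized design in the abstract above is easy to sketch. In the sketch below, only the variable levels (1/2/4 reviewers, 1/2 teams, repair requirement) and the pool size of 11 come from the abstract; the names and the helper function are hypothetical:

```python
import random

# Pool of 11 experienced developers (names are hypothetical placeholders).
developers = [f"dev{i:02d}" for i in range(11)]

def assign_inspection(rng: random.Random):
    """Randomly assign the three independent variables for one inspection."""
    team_size = rng.choice([1, 2, 4])          # reviewers per team
    num_teams = rng.choice([1, 2])             # teams inspecting the code unit
    repair_first = rng.choice([True, False])   # repair defects between teams?
    # Reviewers are drawn without replacement from the pool.
    reviewers = rng.sample(developers, team_size * num_teams)
    return team_size, num_teams, repair_first, reviewers

rng = random.Random(42)
size, teams, repair, revs = assign_inspection(rng)
print(size, teams, repair, revs)
```

Drawing all reviewers for an inspection in one `sample` call guarantees that no developer reviews the same code unit twice, matching the without-replacement condition stated above.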
- Direction of Arrival and the Rank-Revealing URV Decomposition (1998-10-15). Boman, E. C.; Griffen, M. F.; Stewart, G. W. In many practical direction-of-arrival (DOA) problems the number of sources and their directions from an antenna array do not remain stationary. Hence a practical DOA algorithm must be able to track changes with a minimal number of snapshots. In this paper we describe DOA algorithms, based on a new decomposition, that are not expensive to compute or difficult to update. The algorithms are compared with algorithms based on the singular value decomposition (SVD). (Also cross-referenced as UMIACS-TR-91-166)
- Large Latent Semantic Indexing via a Semi-Discrete Matrix Decomposition (1998-10-15). Kolda, Tamara G.; O'Leary, Dianne P. With the electronic storage of documents comes the possibility of building search engines that can automatically choose documents relevant to a given set of topics. In information retrieval, we wish to match queries with relevant documents. Documents can be represented by the terms that appear within them, but literal matching of terms does not necessarily retrieve all relevant documents. There are a number of information retrieval systems based on inexact matches. Latent Semantic Indexing represents documents by approximations and tends to cluster documents on similar topics even if their term profiles are somewhat different. This approximate representation is usually accomplished using a low-rank singular value decomposition (SVD) approximation. In this paper, we use an alternate decomposition, the semi-discrete decomposition (SDD). For equal query times, the SDD does as well as the SVD and uses less than one-tenth the storage for the MEDLINE test set. (Also cross-referenced as UMIACS-TR-96-83)
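To make the LSI idea concrete, here is a tiny rank-k SVD sketch on a hypothetical term-document matrix. The paper's contribution is to replace this SVD factor pair with a semi-discrete decomposition whose factors have entries restricted to {-1, 0, 1}, which is what saves storage; the querying step shown below is the same either way:

```python
import numpy as np

# Hypothetical 5-term x 4-document incidence matrix
# (rows = terms, columns = documents).
A = np.array([[1., 1., 0., 0.],
              [1., 0., 0., 0.],
              [0., 1., 1., 0.],
              [0., 0., 1., 1.],
              [0., 0., 0., 1.]])

# Rank-2 truncated SVD: the standard LSI low-rank representation.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
Ak = (U[:, :k] * s[:k]) @ Vt[:k]

# Score documents against a query containing terms 0 and 1.
q = np.array([1., 1., 0., 0., 0.])
scores = q @ Ak
print(scores)   # document 0, which contains both query terms, scores highest
```

Even in this toy case the low-rank scores spread relevance to document 1, which shares only one query term: the clustering-by-topic effect the abstract describes.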
- Model Checking Concurrent Systems with Unbounded Integer Variables: Symbolic Representations, Approximations and Experimental Results (1998-10-15). Bultan, Tevfik; Gerber, Richard; Pugh, William. Model checking is a powerful technique for analyzing large, finite-state systems. In an infinite-state system, however, many basic properties are undecidable. In this paper, we present a new symbolic model checker which conservatively evaluates safety and liveness properties on infinite-state programs. We use Presburger formulas to symbolically encode a program's transition system, as well as its model-checking computations. All fixpoint calculations are executed symbolically, and their convergence is guaranteed by using approximation techniques. We demonstrate the promise of this technology on some well-known infinite-state concurrency problems. (Also cross-referenced as UMIACS-TR-98-07)
- Transactional Client-Server Cache Consistency: Alternatives and Performance (1998-10-15). Franklin, Michael J.; Carey, Michael J.; Livny, Miron. Client-server database systems based on a page server model can exploit client memory resources by caching copies of pages across transaction boundaries. Caching reduces the need to obtain data from servers or other sites on the network. In order to ensure that such caching does not result in the violation of transaction semantics, a cache consistency maintenance algorithm is required. Many such algorithms have been proposed in the literature and, as all provide the same functionality, performance is a primary concern in choosing among them. In this paper we provide a taxonomy that describes the design space for transactional cache consistency maintenance algorithms and show how proposed algorithms relate to one another. We then investigate the performance of six of these algorithms, and use these results to examine the tradeoffs inherent in the design choices identified in the taxonomy. The insight gained in this manner is then used to reflect upon the characteristics of other algorithms that have been proposed. The results show that the interactions among dimensions of the design space can impact performance in many ways, and that classifications of algorithms as simply "Pessimistic" or "Optimistic" do not accurately characterize the similarities and differences among the many possible cache consistency algorithms. (Also cross-referenced as UMIACS-TR-95-84)
- Global Value Propagation Through Value Flow Graph and Its Use in Dependence Analysis (1998-10-15). Maslov, Vadim. As recent studies show, state-of-the-art parallelizing compilers produce no noticeable speedup for 9 out of 12 PERFECT benchmark codes, while the speedup reached by manually applying certain automatable techniques ranges from 10 to 50. In this paper we introduce the Global Value Propagation algorithm, which unifies several of these techniques. Global propagation is performed using a program abstraction called the Value Flow Graph (VFG). The VFG is an acyclic graph in which vertices and arcs are parametrically specified using F-relations. The distinctive features of our propagation algorithm are: (1) it propagates not only values carried by scalar variables, but also values carried by individual array elements; (2) we do not have to transform a program in order to use propagation results in program analysis. In this paper we focus on the use of the VFG and global value propagation in array dataflow analysis. F-relations are used to represent values produced by uninterpreted function symbols that appear in dependence problems for non-affine program fragments. Global value propagation helps us discover that some of these functions are in fact affine. (Also cross-referenced as UMIACS-TR-94-80)
- Numerical Evaluation of Hierarchical QoS Routing (1998-10-15). Ahn, Sungjoon; Chittiappa, Gayathri; Shankar, A. Udaya. We develop a numerical evaluation method for adaptive hierarchical QoS routing, and demonstrate its viability by application to two networks. Our approach models aggregation and delayed feedback in a straightforward way, and is scalable to the large networks needed to evaluate hierarchical routing.
- An Accurate Time-Management Unit for Real-Time Processors (1998-10-15). Kailas, Krishnan K.; Agrawala, Ashok K. Time management is an important aspect of real-time computation. Traditional high-performance processors provide little or no support for the management of time. In this report, we propose a time-management unit which can greatly help improve the performance of a real-time system. The proposed unit can be added to any processor architecture without affecting its performance. We also explain how the unit helps to solve clock synchronization problems in a real-time network. (Also cross-referenced as UMIACS-TR-97-28)