Institute for Systems Research

Permanent URI for this community: http://hdl.handle.net/1903/4375

Search Results

Now showing 1 - 10 of 48
  • Item
    The MDLe Engine -- A Software Tool for Hybrid Motion Control
    (2000) Hristu, Dimitrios; Krishnaprasad, Perinkulam S.; Andersson, Sean B.; Zhang, F.; Sodre, P.; D'Anna, L.; ISR; CDCSS
    One of the important but often overlooked practical challenges in motion control for robotics and other autonomous machines is translating theoretical tools into software that allows the system to interact effectively with the physical world. More often than not, motion control programs are machine-specific and not reusable, even when the underlying algorithm does not require any changes.

    The work on Motion Description Languages (MDL) has been an effort to formalize a general-purpose robot programming language that allows one to incorporate both switching logic and differential equations. Extended MDL (MDLe) is a device-independent programming language for hybrid motion control, accommodating hybrid controllers, multi-robot interactions and robot-to-robot communications.

    The purpose of this paper is to describe the "MDLe engine," a software tool that implements the MDLe language.

    We have designed a basic compiler/software foundation for writing MDLe code. We provide a brief description of the MDLe syntax, implementation architecture, and functionality. Sample programs are presented together with the results of their execution on a set of physical and simulated mobile robots.
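
    As a rough illustration of the kind of construct MDLe provides, the sketch below models a motion control "atom" as a control law paired with an interrupt condition and a timeout, and runs a sequence of such atoms in a loop. This is a minimal Python sketch under our own assumptions about the structure described above; it is not the actual MDLe syntax or the MDLe engine API, and all names (Atom, run_plan, step) are hypothetical.

        # Hypothetical sketch of a motion-description "atom": a control law u(x, t),
        # an interrupt predicate xi(x, t), and a timeout T. Not the actual MDLe API.
        from dataclasses import dataclass
        from typing import Callable, Sequence

        @dataclass
        class Atom:
            control: Callable[[list, float], list]    # u(x, t) -> actuator command
            interrupt: Callable[[list, float], bool]  # xi(x, t): True stops the atom early
            timeout: float                            # maximum running time T

        def run_plan(atoms: Sequence[Atom], x0: list, step: Callable, dt: float = 0.01) -> list:
            """Run atoms in order; each runs until its interrupt fires or its timeout expires."""
            x = x0
            for atom in atoms:
                t = 0.0
                while t < atom.timeout and not atom.interrupt(x, t):
                    u = atom.control(x, t)
                    x = step(x, u, dt)  # caller-supplied plant/robot model
                    t += dt
            return x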

  • Item
    A Hierarchical Structure For Finite Horizon Dynamic Programming Problems
    (2000) Zhang, Chang; Baras, John S.; ISR; CSHCN
    In dynamic programming (Markov decision) problems, hierarchical structure (aggregation) is usually used to simplify computation. Most research on aggregation of Markov decision problems is limited to the infinite horizon case, which has good tracking ability. However, in real life, finite horizon stochastic shortest path problems are often encountered.

    In this paper, we propose a hierarchical structure to solve finite horizon stochastic shortest path problems in parallel. In general, the approach reduces the time complexity of the original problem to a logarithmic level, which is of significant practical value.
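
    For context, the standard single-processor way to solve a finite horizon problem is backward induction over the Bellman recursion; the hierarchical structure proposed here parallelizes this computation. The Python sketch below shows only that sequential baseline, under assumed inputs (transition probabilities P[a][i][j], stage costs c(i, a), terminal costs g(i)); it is not the parallel algorithm itself.

        # Sequential backward induction for a finite-horizon Markov decision problem.
        # Assumed (hypothetical) inputs: n states, horizon N, action set A,
        # transition probabilities P[a][i][j], stage cost c(i, a), terminal cost g(i).
        def backward_induction(n, N, A, P, c, g):
            V = [g(i) for i in range(n)]           # cost-to-go at the final stage
            policy = [[None] * n for _ in range(N)]
            for k in range(N - 1, -1, -1):         # stages N-1, ..., 0
                V_next, V = V, [0.0] * n
                for i in range(n):
                    best_a, best_v = None, float("inf")
                    for a in A:
                        v = c(i, a) + sum(P[a][i][j] * V_next[j] for j in range(n))
                        if v < best_v:
                            best_a, best_v = a, v
                    policy[k][i], V[i] = best_a, best_v
            return V, policy                       # V[i] = optimal cost from state i at stage 0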

  • Item
    A Channel Probing Scheme for Wireless Networks
    (2000) Zhu, Chenxi; Corson, M. Scott; ISR
    A channel probing scheme for wireless networks is presented. By transmitting a probing signal in a channel and measuring the signal-to-interference ratio (SIR), a link can estimate the channel condition and predict the required transmission power without fully powering up. The channel probing scheme can be used as part of a distributed channel allocation algorithm, and simulations have shown that it outperforms some other comparable schemes.
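
    The idea behind the power prediction can be stated in one line: if a probe sent at power P_probe is received with SIR_meas and the link requires SIR_target, then under a linear interference model the required power is roughly P_probe * SIR_target / SIR_meas. The Python sketch below illustrates only this back-of-the-envelope estimate; the function name and parameters are our own, not the probing scheme or admission rule defined in the paper.

        # Hypothetical illustration: estimate the power a link would need from one probe.
        def estimate_required_power(p_probe_mw, sir_measured, sir_target, p_max_mw):
            """Scale the probe power by the SIR deficit; return None if the channel is unusable."""
            if sir_measured <= 0:
                return None
            p_required = p_probe_mw * (sir_target / sir_measured)
            return p_required if p_required <= p_max_mw else None  # exceeds the power budget

        # Example: a 1 mW probe saw an SIR of 2, and the link needs an SIR of 10.
        print(estimate_required_power(1.0, 2.0, 10.0, p_max_mw=100.0))  # -> 5.0 (mW)
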
  • Item
    REU Report: Development of an Agent-Based Factory Shop Floor Simulation Tool
    (1999) Whangbo, Albert J.; Lin, Edward; Herrmann, Jeffrey; ISR
    Manufacturing systems of the future are expected to be agile and failure-tolerant. Current simulation tools are not well equipped to model these dynamically changing systems. Agent-based simulation represents an attractive alternative to traditional simulation techniques. This project aims to develop software for agent-based factory shop floor simulation. The current version of the factory simulation software is implemented in Java. Although the program lacks important features of a decision support tool, it provides a flexible agent-based framework for modeling and testing shop floor configurations.
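
    As a rough picture of what "agent-based" means in this setting, the sketch below has job tasks announced to machine agents, which bid with their earliest available time; the lowest bid wins. This is a minimal Python sketch under our own assumptions, not the Java framework described in the report, and all class and method names are hypothetical.

        # Hypothetical sketch of an agent-based shop floor: tasks are announced,
        # machine agents bid with their earliest free time, and the lowest bid wins.
        class Task:
            def __init__(self, operation, duration):
                self.operation, self.duration = operation, duration

        class MachineAgent:
            def __init__(self, name, capabilities):
                self.name, self.capabilities, self.busy_until = name, set(capabilities), 0.0

            def bid(self, task):
                return self.busy_until if task.operation in self.capabilities else None

            def accept(self, task):
                start = self.busy_until
                self.busy_until = start + task.duration
                return start

        def dispatch(tasks, machines):
            schedule = []
            for task in tasks:
                bids = [(m.bid(task), m) for m in machines]
                bids = [(b, m) for b, m in bids if b is not None]
                if bids:
                    _, winner = min(bids, key=lambda bm: bm[0])
                    schedule.append((task.operation, winner.name, winner.accept(task)))
            return schedule  # list of (operation, machine, start time)
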
  • Item
    Architecture, Design, Simulation and Performance Evaluation for Implementing ALAX -- The ATM LAN Access Switch Integrating the IEEE 1355 Serial Bus
    (1997) Charleston, Giles C.; Makowski, A.; ISR; CSHCN
    IEEE 1355 is a serial bus standard for Heterogeneous InterConnect (HIC) developed for "enabling high-performance, scalable, modular and parallel systems to be built with low system integration cost." However, to date, few systems have been built around this standard. In this thesis, we propose ALAX -- an internetworking switching device based on IEEE 1355. The aim of the thesis is two-fold. First, we discuss and summarize the research work leading to the architecture, design, and simulation development of ALAX; we synthesize and analyze the data collected from simulation experiments on the 4-port model of ALAX (i.e., 4-by-4, with four input and output queues), conducted over the two-year span of the project. Second, we expand the original 4-by-4 ALAX simulation model into 8-, 12-, and 16-port models and present and interpret the outcomes. Overall, we establish a performance assessment of the ALAX switch and identify several critical design measurements to support the ALAX prototype implementation.

    We review progress made in Local Area Networks (LANs), where traditional software-based bridges and routers are in many instances being replaced by hardware-based switches to enhance network performance. Within that context, ATM (Asynchronous Transfer Mode) technology emerges as an alternative for the next generation of high-speed LANs. ALAX thus embodies our approach to building an ATM-LAN interface on a suitable switching platform. ALAX currently provides the capability to conveniently interconnect legacy Ethernet and ATM-based networks. Its distributed architecture features a multi-processor environment of T9000 transputers with parallel processing capability, a 32-by-32 way non-blocking crossbar fabric (C104 chipset) partitioned into Transport (i.e., Data) and Control planes, and many other modules interlaced with IEEE 1355-based connectors. It also employs existing and emerging protocols such as LANE (LAN Emulation), IEEE 802.3, and SNMP (Simple Network Management Protocol).

    We provide the component breakdown of the ALAX simulation model, built with the Optimized Network Engineering Tools (OPNET). The critical parameters for the study are acceptable processor speeds and the queue sizes of the shared-memory buffer at each switch port. The performance metric used is the end-to-end packet delay. Finally, we close the thesis with recommendations pertaining to performance and design measurements, and a brief summary of areas for further study.
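
    To make the study's parameters concrete: if each switch port is approximated as a single-server queue whose service rate is set by the processor speed and whose shared-memory buffer holds K packets, a first-cut estimate of the per-port delay follows from the standard M/M/1/K formulas. The Python sketch below is only that rough analytical check, under our own modeling assumptions; it is not the OPNET simulation model used in the thesis.

        # M/M/1/K approximation of one switch port: arrival rate lam (packets/s),
        # service rate mu (packets/s, set by processor speed), buffer of K packets.
        def mm1k_mean_delay(lam, mu, K):
            rho = lam / mu
            if abs(rho - 1.0) < 1e-12:
                p_block = 1.0 / (K + 1)
                mean_in_system = K / 2.0
            else:
                p_block = (1 - rho) * rho**K / (1 - rho**(K + 1))  # probability a packet is dropped
                mean_in_system = rho / (1 - rho) - (K + 1) * rho**(K + 1) / (1 - rho**(K + 1))
            lam_eff = lam * (1 - p_block)                          # accepted traffic
            return mean_in_system / lam_eff                        # Little's law: W = L / lam_eff

        # Example: 80,000 packets/s offered to a port serving 100,000 packets/s with a 16-packet buffer.
        print(mm1k_mean_delay(80_000.0, 100_000.0, 16))
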
  • Item
    An Integrated Rapid Prototyping and Vacuum Casting System for Medical Applications
    (1997) Surana, Rena; ISR
    Evaluation of products in the design stage has played a critical role in product development. Methods to build functional prototypes have been a deciding factor for design verification. As an emerging technology, rapid prototyping is revolutionizing the process of building prototypes. However, material limitations and high costs call for further expansion of this technology focusing on batch production of prototypes with material options.

    Recognizing the challenge to produce multiple prototypes, this thesis research aims to integrate three state-of-the-art technologies: 3D solid modeling, rapid prototyping, and vacuum casting. A system architecture combining hardware and computer software is designed and implemented.

    The system utilizes computer graphics to construct a 3D model of an object through visualization. A software system, Maestro, processes a CAD file, generates support structures, and creates slice data to build prototypes by a stereolithography process. The built part serves as a master pattern for creation of a silicone rubber mold in a vacuum environment. This vacuum environment creates a material flow rate that ensures replicas with superior quality in regard to surface finish and dimensional accuracy. This mold is then used to cast multiple replicas of the master pattern.

    The unique contribution of this research is the application of the developed system to meet a specific need in medical research: an effort to restore sight in blind individuals by implanting electrodes in the visual cortex. Six replicas of a monkey skull are produced for surgeons to practice surgical procedures. Image data obtained from CT scans of a monkey head are successfully used to construct a 3D solid model and fabricate a batch of six functional prototypes. The superior quality of these replicas has offered a unique opportunity for exploratory surgery in efforts to restore sight.

  • Item
    Advances in High Performance Knowledge Representation
    (1996) Stoffel, K.; Taylor, M.; Hendler, James A.; Saltz, J.; Andersen, William; ISR
    Real world applications are demanding that KR systems provide support for knowledge bases containing millions of assertions. We present Parka-DB, a high-performance reimplementation of the Parka KR language which uses a standard relational DBMS. The integration of a DBMS and the Parka KR language allows us to efficiently support complex queries on extremely large KBs using a single processor, as opposed to our earlier massively parallel system. In addition, the system can make good use of secondary memory, with the whole system needing less than 16MB of RAM to hold a KB of over 2,000,000 assertions. We demonstrate empirically that this reduction in primary storage requires only about 10% overhead in time, and decreases the load time of very large KBs by more than two orders of magnitude.
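
    The core storage idea, keeping assertions as rows in a relational table and answering structural queries with SQL joins, can be sketched in a few lines. The Python/SQLite sketch below illustrates that pattern under our own assumed schema; it is not Parka-DB's actual schema, query language, or DBMS.

        # Hypothetical sketch: assertions as (predicate, subject, object) rows in SQLite,
        # with a self-join answering a small conjunctive query. Not the Parka-DB schema.
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE assertions (predicate TEXT, subject TEXT, object TEXT)")
        db.execute("CREATE INDEX idx_pred_obj ON assertions (predicate, object)")
        db.executemany(
            "INSERT INTO assertions VALUES (?, ?, ?)",
            [("isa", "sparrow", "bird"), ("isa", "penguin", "bird"), ("can", "sparrow", "fly")],
        )

        # "Which x is a bird and can fly?" expressed as a join over the assertion table.
        rows = db.execute("""
            SELECT a.subject
            FROM assertions a JOIN assertions b ON a.subject = b.subject
            WHERE a.predicate = 'isa' AND a.object = 'bird'
              AND b.predicate = 'can' AND b.object = 'fly'
        """).fetchall()
        print(rows)  # -> [('sparrow',)]
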
  • Item
    Optimal Graph Algorithms on Linear Arrays
    (1994) Huang, Ying-Min; JaJa, J.F.; ISR
    We consider a computational model based on a fixed-size linear array. Based on this model, we develop techniques that lead to optimal algorithms for a number of combinatorial problems. Specifically, we develop optimal algorithms for determining the connected components, the minimum spanning forest, the bridges, and the articulation points of an undirected graph, and the pseudo-open ear decomposition and the biconnected components of a connected undirected graph. We also develop optimal algorithms to solve the list ranking and convex hull problems. The input to each of our graph algorithms is a set of edges, distributed evenly among the processors. Our graph algorithms run in Theta(m/p + n + p) time, where m is the number of edges, n is the number of vertices, and p is the number of processors, which is between 1 and m. The convex hull algorithm runs in Theta((n log n)/p + n) time, which can be shown to be the best possible on a p-processor linear array for p between 1 and n.

    Further, we implement our connected components algorithm on the ParsyTec SuperCluster, which contains 32 IMS T800 transputers. The implementations use two different logical network configurations and data broadcasting topologies, corresponding to a pipeline model and a tree model, with the physical interconnection network among the processors constructed as a linear array. We evaluate the algorithm's performance on each of these two models and compare the two. We have run experiments on many graphs of different sizes and with varying numbers of processors; the results show that communication costs dominate whenever we use more than four processors for problems of up to 100 vertices or 4000 edges. Furthermore, the algorithm based on the binary tree logical network runs faster than the one based on the linear array logical network.
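
    For reference, the single-processor baseline for the connected components problem is a plain union-find pass over the edge list, roughly the p = 1 case of the Theta(m/p + n + p) bound. The Python sketch below shows only that sequential baseline; it is not the linear-array algorithm or the transputer implementation described above.

        # Sequential union-find over an edge list: the one-processor baseline for
        # connected components (labels are union-find roots; not the parallel algorithm).
        def connected_components(n, edges):
            parent = list(range(n))

            def find(v):
                while parent[v] != v:
                    parent[v] = parent[parent[v]]  # path halving
                    v = parent[v]
                return v

            for u, v in edges:                     # one pass over the m edges
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[ru] = rv                # merge the two components
            return [find(v) for v in range(n)]     # component label per vertex

        print(connected_components(6, [(0, 1), (1, 2), (3, 4)]))  # -> [2, 2, 2, 4, 4, 5]
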
  • Item
    VLSI Design of High-Speed Time-Recursive 2-D DCT/IDCT Processor for Video Applications
    (1994) Srinivasan, V.; Liu, K.J. Ray; ISR
    In this paper we present a full-custom VLSI design of a high-speed 2-D DCT/IDCT processor based on the new class of time-recursive algorithms and architectures, which had not previously been implemented to demonstrate its performance. We show that the VLSI implementation of this class of DCT/IDCT algorithms can easily meet the high-speed requirements of HDTV due to its modularity, regularity, local connectivity, and scalability. Our design of the 8 x 8 DCT/IDCT can operate at 50 MHz with a 400 Mbps throughput, based on a very conservative estimate in 1.2 μm CMOS technology. In comparison to existing designs, our approach offers many advantages that can be further explored for even higher performance.
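
    For reference, the standard separable (row-column) 8 x 8 DCT-II that such a processor computes can be written in a few lines of NumPy. The sketch below is only this textbook reference computation, useful for checking outputs; it is not the time-recursive algorithm or the VLSI architecture presented in the paper.

        # Reference 8x8 2-D DCT-II/IDCT using an orthonormal DCT matrix (NumPy).
        import numpy as np

        def dct_matrix(n=8):
            k = np.arange(n).reshape(-1, 1)
            i = np.arange(n).reshape(1, -1)
            C = np.sqrt(2.0 / n) * np.cos((2 * i + 1) * k * np.pi / (2 * n))
            C[0, :] = np.sqrt(1.0 / n)              # DC row scaling for orthonormality
            return C

        def dct2(block):
            C = dct_matrix(block.shape[0])
            return C @ block @ C.T                  # separable: 1-D transform along both axes

        def idct2(coeffs):
            C = dct_matrix(coeffs.shape[0])
            return C.T @ coeffs @ C                 # inverse of the orthonormal transform

        block = np.arange(64, dtype=float).reshape(8, 8)
        print(np.allclose(idct2(dct2(block)), block))  # -> True
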
  • Item
    Declustering R-Tree on Multi-Computer Architectures
    (1994) Koudas, N.; Faloutsos, Christos; Kamel, Ibrahim; ISR
    We study a method to decluster a spatial access method (specifically, an R-tree) on a shared-nothing multi-computer architecture [9]. Our first step is to propose a software architecture, with the top levels of the R-tree on the 'master server' and the leaf nodes distributed across the servers. Next, we study the optimal capacity of leaf nodes, or 'chunk size'. We express the response time on range queries as a function of the 'chunk size', and we show how to optimize it. This formula assumes that the 'chunks' are perfectly declustered. We propose to use the Hilbert curve to achieve such a good declustering.

    Finally, we implemented our method on a network of workstations and compared the experimental and theoretical results. The conclusions are that (a) our formula for the response time is accurate (the maximum relative error was 30%; the typical error was in the vicinity of 10-15%); (b) the Hilbert-based declustering consistently outperforms a random declustering; and (c) most importantly, although the optimal chunk size depends on several factors (database size, size of the query, speed of the network), a safe choice for it is 1 page (whatever the page size of the operating system). We show analytically and experimentally that a chunk size of 1 page gives optimal or close-to-optimal results for a wide range of the parameters.
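
    The Hilbert-based declustering can be illustrated directly: compute each leaf chunk's position along the Hilbert curve from its grid-quantized centroid, then deal the chunks out to servers in that order. The Python sketch below uses the standard iterative Hilbert-index computation on a power-of-two grid; the function names, grid resolution, and round-robin assignment are our own illustrative assumptions, not the authors' implementation.

        # Hypothetical sketch: assign R-tree leaf chunks to servers in Hilbert-curve order.
        def hilbert_index(n, x, y):
            """Position of cell (x, y) along the Hilbert curve on an n x n grid (n a power of two)."""
            d, s = 0, n // 2
            while s > 0:
                rx = 1 if (x & s) else 0
                ry = 1 if (y & s) else 0
                d += s * s * ((3 * rx) ^ ry)
                if ry == 0:                          # rotate/reflect the quadrant
                    if rx == 1:
                        x, y = n - 1 - x, n - 1 - y
                    x, y = y, x
                s //= 2
            return d

        def decluster(centroids, num_servers, n=256):
            """centroids: grid cells (x, y) of leaf chunks; servers assigned round robin in Hilbert order."""
            ranked = sorted(range(len(centroids)), key=lambda i: hilbert_index(n, *centroids[i]))
            assignment = [0] * len(centroids)
            for rank, i in enumerate(ranked):
                assignment[i] = rank % num_servers
            return assignment

        print(decluster([(0, 0), (0, 1), (1, 1), (200, 5)], num_servers=3))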