Electrical & Computer Engineering
Browsing Electrical & Computer Engineering by Issue Date
Now showing 1 - 20 of 1298

Item: The Application of the Gyrator Concept to Transistors (1956)
Breeskin, Sol Daniel; Corcoran, George F.; Electrical & Computer Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md)

Item: Sunspots, and the Solar Influence Upon High Frequency Radio Communications (1960)
Jacobs, George; Reed, Henry R.; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md)

Item: Study of a UHF Command Destruct Missile Antenna System (1960)
Mullins, Elwood Hatcher; Schuchard, E.A.; Electrical and Computer Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md)

Item: Presentation of a New High-Frequency Communication System Performance Prediction Technique (1965)
Gatts, Thomas Fiscus Jr.; Reed, Henry R.; Electrical & Computer Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md)
The prediction technique developed by the National Bureau of Standards has been used extensively by high-frequency communicators. An adaption of this technique is used to demonstrate the type of results obtained when applied to the Buffalo, N.Y. to Boston, Mass. (B/B) link for January and July 1965. A new prediction technique is presented which will allow the HF communicator to predict system performance between the maximum useable frequency (MUF) and the lowest useable frequency (LUF) and which is flexible enough to allow system parameter changes to be made and the effect on the overall system determined. The new technique is demonstrated by applying it to the B/B link for January and July 1965 and displaying the results in the form of relative gain contours, which show the effect on communication capability of reducing the LUF by increasing system gain and the increase in process gain that may be achieved for the purpose of raising the data rate or decreasing transmission error rate. Some of the many applications of the results of this new technique are presented. The results are used: (1) to facilitate the selection of necessary operating frequencies to provide communication throughout a 24-hour period, (2) to estimate the severity and length of occurrence of multipath, (3) to investigate possible frequency adaption, and (4) to investigate possible power adaption.

Item: A Set of Karnaugh Map Manipulation Computer Routines for Use in Logic Design (1968)
Shub, Charles Martin; Marcovitz, Alan B.; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md)
The Karnaugh map provides a convenient visual aid for the manipulation of switching functions for both the design engineer and the student of logic design. Algorithms for the minimization of switching functions by the manipulation of information displayed on a Karnaugh map are presented, along with a method of obtaining more information than was previously possible from the Karnaugh map. A dynamic, flexible, and easy to use collection of computer subroutines written in the MAD language to accomplish such manipulations as a subset of an entire logic design system of computer programs is described. A user's manual for the entire system is included, as well as descriptions of the programs used in conjunction with the map manipulation process. Several examples are included.
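
As a rough, modern illustration of the data structure such routines manipulate (not the MAD-language subroutines from the thesis), the short Python sketch below lays a four-variable switching function out on a Gray-coded 4x4 Karnaugh map; the example function and its minterm list are invented.

# Lay a four-variable switching function out on a Gray-coded 4x4 Karnaugh map.
# Illustrative only; the function and minterm list are invented for the example.

GRAY = [0b00, 0b01, 0b11, 0b10]   # Gray-code ordering of rows (AB) and columns (CD)

def karnaugh_map(minterms):
    """Return a 4x4 grid of 0/1 cells; a minterm number is the 4-bit value ABCD."""
    grid = []
    for ab in GRAY:
        row = []
        for cd in GRAY:
            row.append(1 if ((ab << 2) | cd) in minterms else 0)
        grid.append(row)
    return grid

def print_map(minterms):
    print("        " + "  ".join(f"{cd:02b}" for cd in GRAY) + "   (columns = CD)")
    for ab, row in zip(GRAY, karnaugh_map(minterms)):
        print(f"AB={ab:02b}   " + "   ".join(str(cell) for cell in row))

if __name__ == "__main__":
    # f(A,B,C,D) = sum of minterms (0, 2, 8, 10): the four corner cells, i.e. the group B'D'
    print_map({0, 2, 8, 10})
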
Item: Some Characteristics of Broadband Delta-Sigma Modulation (1971)
Biegalski, Robert J.; Tretter, Steven; Electrical and Computer Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md)
This paper presents an analysis, using correlation techniques, of an idealized Delta-Sigma Modulation system. An analytical assumption of errors with a marginally Gaussian distribution is shown to yield accurate results for broadband modulation with a maximum input-output cross correlation. It is also shown that this maximum is greatest for the degenerate case of only "hard limiting" with no feedback and no integration. A case of highly correlated inputs for Delta-Sigma Modulation is also discussed to compare it with broadband performance and "hard limiting."
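
As a toy companion to this abstract (it does not reproduce the thesis's correlation analysis), the Python sketch below simulates an idealized first-order Delta-Sigma loop: an integrator followed by a one-bit "hard limiting" quantizer whose output is fed back and subtracted from the input. The test tone, its amplitude, and the averaging window are arbitrary.

import math

def delta_sigma(samples):
    """Return the +/-1 bitstream produced by a first-order Delta-Sigma loop."""
    integrator, out = 0.0, []
    for x in samples:
        integrator += x - (out[-1] if out else 0.0)    # integrate input minus fed-back output
        out.append(1.0 if integrator >= 0.0 else -1.0)  # one-bit quantizer ("hard limiter")
    return out

if __name__ == "__main__":
    n = 2000
    tone = [0.5 * math.sin(2 * math.pi * 5 * t / n) for t in range(n)]   # slow test tone
    bits = delta_sigma(tone)
    window = 100   # the local average of the bitstream approximates the slowly varying input
    print(f"input mean over first {window} samples:   {sum(tone[:window]) / window:+.3f}")
    print(f"bitstream mean over the same window:  {sum(bits[:window]) / window:+.3f}")
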
Item: Bleaching Kinetics of Visual Pigments (1977)
Resnik, Judith Arlene; Zajac, Felix E. III; Electrical & Computer Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md)
A rapid scanning microspectrophotometer (RMSP) has been developed and utilized to study the photoproducts resulting from the bleaching of rhodopsin in the isolated retina of the frog. The RMSP is capable of measuring absorption spectra at multiple wavelengths within the milliseconds and longer time domain. The unusual characteristic of the instrument is the use of a special cathode ray tube as a measuring light source. Spectral scanning is accomplished electronically, with a sampling interval of 600 microseconds for each waveband. A lock-in amplifier system enables the RMSP to be utilized as either a single- or dual-beam instrument. The results discussed in this dissertation have shown that hydrogen ion availability is a primary cofactor in determining the relative concentration of the metarhodopsin III photoproduct, with less appearing, in lieu of greater free retinal formation, at low pH levels. Metabolic factors have also been shown to influence the pathways of photoproduct decay. The most significant effect has been observed in nonacidic intracellular environments, with deficiencies in metabolic energy production also favoring the direct formation of free retinal from metarhodopsin II. The half-times of formation and decay of metarhodopsin III have also been observed to vary, depending on the extracellular environment of photoreceptor cells. In general, both half-times tend to be greater when proportionately more metarhodopsin III is formed. The ratio of the two half-times, however, remains relatively constant, except in anoxic conditions, in which the decay half-time is significantly prolonged with respect to the formation half-time. Several problems associated with the control of experimental conditions have been discussed as they relate to photoproduct sequence and kinetics. The elimination of as many metabolic, ionic, and other insufficiently controlled conditions as possible has been pointed out as a necessary requirement for obtaining meaningful quantitative results. In addition, the baseline magnitude of the optical density of the retina, which is, in part, a quantification of light scattering, has been shown to be significantly larger in conditions of low intracellular pH or insufficient substrate supply. The utilization of this parameter as an indirect indicator of the probable sequence of photoproducts has been discussed.
In conclusion, this research has provided greater insight into the mechanisms affecting the later, slow photoproduct processes in isolated retinas. In particular, the interaction of hydrogen ions and metabolic factors influences the pathways of photoproduct decay in isolated retinas subsequent to metarhodopsin II. The results and methods described here should be useful in establishing a context in which to study the faster mechanisms involved in photochemical and electrical transduction in photoreceptor cells. In addition, these results may become important in understanding the normal and pathological functioning of the eye.

Item: New Methods for the Detection and Interception of Unknown, Frequency-Hopped Waveforms (1990)
Snelling, William Edward; Geraniotis, Evaggelos; Electrical & Computer Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md)
Three new methods for the detection and interception of frequency-hopped waveforms are presented. The first method extends the optimal, fixed-block detection method based on the likelihood ratio to a sequential one based on the Sequential Probability Ratio Test (SPRT). The second method is structured around a compressive receiver and is highly efficient yet easily implemented. The third method is based on the new concept of the Amplitude Distribution Function (ADF) and results in a detector that is an extension of the radiometer. The first method presents a detector structured to make a decision sequentially, that is, as each data element is collected. Initially, a purely sequential test is derived and shown to require fewer data for a decision. A truncated sequential method is also derived and shown to reduce the data needed for a decision while operating under poor signal-to-noise ratios (SNRs). A detailed performance analysis is presented along with numerical and Monte Carlo analyses of the detectors. The second method assumes stationary, colored Gaussian interference and presents a detailed model of the compressive receiver. A locally optimal detector is developed via likelihood ratio theory and yields a reference to which previous ad hoc schemes are compared. A simplified, suboptimal scheme is developed that trades off duty cycle for performance, and a technique for estimating hop frequency is developed. The performance of the optimal and suboptimal detectors is quantified. For the suboptimal scheme, the trade-off with duty cycle is studied. The reliability of the hop frequency estimator is bounded and traded off against duty cycle. In the third method, a precise definition of the ADF is given, from which follows a convolutional relationship between the ADFs of signal and additive noise. A technique is given for deconvolving the ADF, with which signal and noise components can be separated. A detection statistic is characterized, yielding a framework on which to synthesize a detector. The detector's performance is analyzed and compared with that of the radiometer.
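
The first method above builds on Wald's Sequential Probability Ratio Test. As a reminder of how such a test is wired together (a generic textbook sketch for a constant signal in white Gaussian noise, not the dissertation's detector; mu, sigma, alpha, and beta are invented), the Python below accumulates a log-likelihood ratio sample by sample and stops as soon as either threshold is crossed.

import math
import random

def sprt(samples, mu=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Decide H1 (mean mu) versus H0 (mean 0) sequentially, as samples arrive."""
    upper = math.log((1 - beta) / alpha)    # cross this: accept H1 ("signal")
    lower = math.log(beta / (1 - alpha))    # cross this: accept H0 ("noise")
    llr, n = 0.0, 0
    for x in samples:
        n += 1
        # log-likelihood-ratio increment for N(mu, sigma^2) against N(0, sigma^2)
        llr += (mu * x - mu * mu / 2.0) / (sigma * sigma)
        if llr >= upper:
            return "signal", n
        if llr <= lower:
            return "noise", n
    return "undecided", n

if __name__ == "__main__":
    random.seed(0)
    noisy_signal = (random.gauss(1.0, 1.0) for _ in range(10_000))
    print(sprt(noisy_signal))   # typically decides "signal" after a handful of samples
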
Specifically, (1) we present fast parallel algorithms for routing any one-to-one assignment over the Beneš network, (2) we propose new multicasting networks that can efficiently realize any one-to-many assignment, and (3) we give an explicit construction of linear-size expanders with very large expansion coefficients. Our parallel routing algorithms for Beneš networks are realized on two different topologies. Using these algorithms, we show that any unicast assignment that involves O(k) pairs of inputs and outputs can be routed through an n-input Beneš network in O(lg^2 k + lg n) time without pipelining and O(lg k) time with pipelining if the topology is complete, and in O(lg^4 k + lg^2 k lg n) time without pipelining and O(lg^3 k + lg k lg n) time with pipelining if the topology is the extended perfect shuffle. These improve the best-known routing time complexities of the parallel algorithms of Lev et al. and Nassimi and Sahni by a factor of O(lg n). Our multicasting network uses a very simple self-routing scheme which requires no separate computer model for routing. Including the routing cost, it can be constructed with O(n lg^2 n) bit-level constant-fanin logic gates and O(lg^2 n) bit-level depth, and it can realize any multicast assignment in O(lg^3 n) bit-level time. These complexities match or are better than those of multicasting networks of the same cost reported in the literature. In addition to its attractive routing scheme, our multicasting network is input-initiated and can pipeline multicast assignments through itself. With pipelining, the average routing time for O(lg^2 n) multicast assignments can be reduced to O(lg n), which is the best among the multicasting networks previously reported in the literature. Our linear-size expanders are explicitly constructed by following a traditional design and analysis technique. We construct a family of linear-size expanders with density 33 and expansion coefficient 0.868. This expansion coefficient is the largest among the linear-size expanders that have been similarly constructed. Using these expanders, we also report a family of explicitly constructed superconcentrators with density 208.
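
For scale, the sketch below computes the standard size figures for the network these routing algorithms target. These are textbook facts about the Beneš network, not part of the dissertation's contribution: an n-input Beneš network built from 2x2 switches (n a power of two) has 2*log2(n) - 1 stages of n/2 switches.

import math

def benes_stages(n):
    """Number of switch stages in an n-input Benes network (n a power of two)."""
    assert n >= 2 and n & (n - 1) == 0, "n must be a power of two"
    return 2 * int(math.log2(n)) - 1

def benes_switches(n):
    """Total number of 2x2 switches: n/2 per stage."""
    return benes_stages(n) * (n // 2)

if __name__ == "__main__":
    for n in (8, 64, 1024):
        print(f"n = {n:5d}: {benes_stages(n):2d} stages, {benes_switches(n):6d} 2x2 switches")
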
Item: Data Acquisition Interface of a VLSI Cochlea Model (1993)
Edwards, Thomas G.; Shamma, Shihab; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md)
Computer models of cochlear processing take exceedingly long times to run, even for short data sets. A data acquisition system was developed for a new switched-capacitor VLSI cochlea model chip, in order to rapidly perform cochlear processing on digitized speech samples. The system is capable of processing very long speech samples. Processing is in near-real-time, though it takes about 2 minutes per second of speech to write the large amount of data to a hard drive. Software has also been developed to convert the output data into a form readable by the ESPS digital signal processing package from Entropic Speech, Inc.

Item: Microwave Nonlinearities in Photodiodes (1994)
Williams, Keith Jake; Dagenais, Mario; Electrical & Computer Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, MD)
The nonlinearities in p-i-n photodiodes have been measured and numerically modeled. Harmonic distortion, response reduction, and sinusoidal output distortion measurements were made with two single-frequency offset-phase-locked Nd:YAG lasers, which provided a source dynamic range greater than 130 dB, a 1 MHz to 50 GHz frequency range, and optical powers up to 10 mW.
A semi-classical approach was used to solve the carrier transport in a one-dimensional p-i-n photodiode structure. This required the simultaneous solution of three coupled nonlinear differential equations: Poisson's equation and the hole and electron continuity equations. Space-charge electric fields, loading in the external circuit, and absorption in undepleted regions next to the intrinsic region all contributed to the nonlinear behavior described by these equations. Numerical simulations were performed to investigate and isolate the various nonlinear mechanisms. It was found that for intrinsic-region electric fields below 50 kV/cm, the nonlinearities were influenced primarily by the space-charge electric-field-induced change in hole and electron velocities. Between 50 and 100 kV/cm, the nonlinearities were found to be influenced primarily by changes in electron velocity for frequencies above 5 GHz and by p-region absorption below 1 GHz. Above 100 kV/cm, only p-region absorption could explain the observed nonlinear behavior, where only 8 to 14 nm of undepleted absorbing material next to the intrinsic region was necessary to model the observed second-harmonic distortions of -60 dBc at 1 mA. Simulations were performed at high power densities to explain the observed response reductions and time distortions. A radially inward component of electron velocity was discovered and, under certain conditions, was estimated to have the same magnitude as the axial velocity. The model was extended to predict that maximum photodiode currents of 50 mA should be possible before a sharp increase in nonlinear output occurs. For capacitively limited devices, the space-charge-induced nonlinearities were found to be independent of the intrinsic region length, while external circuit loading was determined to cause higher nonlinearities in shorter devices. Simulations indicate that second-harmonic improvements of 40 to 60 dB may be possible if the photodiode can be fabricated without undepleted absorbing regions next to the intrinsic region.
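
For reference, the three coupled equations named in this abstract take the standard one-dimensional drift-diffusion form below (textbook notation; the thesis's specific boundary conditions, velocity/mobility models, and undepleted-absorption terms are not reproduced here):

\begin{aligned}
\frac{\partial E}{\partial x} &= \frac{q}{\epsilon}\left(p - n + N_D^{+} - N_A^{-}\right) && \text{(Poisson)} \\
\frac{\partial n}{\partial t} &= \frac{1}{q}\frac{\partial J_n}{\partial x} + G - R,
\qquad J_n = q \mu_n n E + q D_n \frac{\partial n}{\partial x} && \text{(electron continuity)} \\
\frac{\partial p}{\partial t} &= -\frac{1}{q}\frac{\partial J_p}{\partial x} + G - R,
\qquad J_p = q \mu_p p E - q D_p \frac{\partial p}{\partial x} && \text{(hole continuity)}
\end{aligned}

where E is the electric field, n and p the electron and hole densities, J_n and J_p the current densities, mu and D the carrier mobilities and diffusion coefficients, N_D+ and N_A- the ionized dopant densities, G the optical generation rate, and R the recombination rate.
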
Item: Analysis of Control Strategies for a Human Skeletal System Pedaling a Bicycle (1995)
Abbott, Scott Bradley; Levine, William S.; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md)
The study of human locomotion has gained more attention recently with the availability of better analytic and computational tools with which to examine it. A subject under much study within the field today is the effort to model human motor control systems using control systems methods. Analytic, computational, and experimental studies of locomotion can produce models that provide further insight into the design and function of human systems, as well as provide directions for research into therapies for muscle- and nerve-related disorders affecting these systems. This thesis examines how computational methods can be utilized to study the functionality of these systems. Building on past research, dynamic models for a human skeletal system pedaling a bicycle are used as a basis for examining various methods of implementing inputs that will control the cycling. Two models are used: a three degree-of-freedom model implementing ideal torque inputs at the hip, knees, and feet, and a one degree-of-freedom model involving inputs at the hip and knee only. Both models are characterized by highly nonlinear dynamics, requiring the use of nonlinear analysis, optimization theory, and computational methods for examination.
Control of the one degree-of-freedom model has been addressed in previous work; here, parameterization of the control and the process of learning it are examined. Next, control strategies for the more complex three degree-of-freedom model are developed. Finally, results for upright and recumbent cycling are compared using the three degree-of-freedom model.

Item: Composing With Genetic Algorithms (International Computer Music Association, 1995-09)
Jacob, Bruce
Presented is an application of genetic algorithms to the problem of composing music, in which GAs are used to produce a set of data filters that identify acceptable material from the output of a stochastic music generator. The algorithmic composition system variations is described, and musical examples of its output are given. Also discussed briefly is the system's application to microtonal music.
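
The "evolve filters over a stochastic generator" idea lends itself to a very small demonstration. The Python below is an invented toy, not the variations system: a genetic algorithm evolves interval-preference weights against a placeholder fitness function, and the evolved filter is then used to pick a fragment from a random melody generator.

import random

random.seed(1)
INTERVALS = list(range(-7, 8))            # melodic intervals, in scale steps

def random_fragment(length=8):
    """The 'stochastic music generator': a random sequence of melodic intervals."""
    return [random.choice(INTERVALS) for _ in range(length)]

def fitness(weights, fragments):
    """Invented placeholder objective: filters score well when the material they
    favor uses small, singable steps."""
    total = 0.0
    for frag in fragments:
        score = sum(weights[i + 7] for i in frag)
        target = sum(1.0 / (1 + abs(i)) for i in frag)
        total -= abs(score - target)
    return total

def evolve(generations=50, pop_size=30):
    """Evolve one interval-preference filter with one-point crossover and mutation."""
    population = [[random.random() for _ in INTERVALS] for _ in range(pop_size)]
    fragments = [random_fragment() for _ in range(40)]
    for _ in range(generations):
        population.sort(key=lambda w: fitness(w, fragments), reverse=True)
        parents = population[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(len(INTERVALS))
            child = a[:cut] + b[cut:]                                    # crossover
            child[random.randrange(len(INTERVALS))] = random.random()   # mutation
            children.append(child)
        population = parents + children
    return max(population, key=lambda w: fitness(w, fragments))

if __name__ == "__main__":
    best_filter = evolve()
    candidates = [random_fragment() for _ in range(10)]
    chosen = max(candidates, key=lambda f: sum(best_filter[i + 7] for i in f))
    print("fragment selected by the evolved filter:", chosen)
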
Item: Magnetic imaging in the presence of external fields: Technique and applications (invited) (American Institute of Physics, 1996-04-15)
Gomez, Romel D.; Burke, Edward R.; Mayergoyz, Isaak D.
Magnetic force microscopy (MFM) in the presence of an external magnetic field has been developed. This has led to further understanding of image formation in MFM as well as new insights concerning the interaction of magnetic recording media with an external field. Our results confirm that, at low applied fields, image formation results from the interaction of the probe with the component of the local surface field along the direction of the probe's magnetization. By reorienting the probe's magnetization by an appropriate application of an external field, it is possible to selectively image specific components of the local field. At higher applied fields, the probe becomes saturated and the changes in the images may be attributed to magnetization reversal of the sample. We have observed the transformations that occur at various stages of the dc erasure of thin-film recording media. This technique has also been applied to conventional magneto-optical media to study domain collapse caused by increasing temperature with an external bias field. The methods, results, and their analysis are presented.

Item: Switching characteristics of submicron cobalt islands (American Institute of Physics, 1996-07-01)
Gomez, R. D.; Shih, M. C.; New, R. M. H.; Pease, R. F. W.; White, R. L.
The magnetic characteristics of 0.2 × 0.4 × 0.02 μm³ cobalt islands were investigated using magnetic force microscopy in the presence of an applied field. The islands were noninteracting and showed a wide variety of single- and multidomain configurations. The distribution of magnetization directions supports earlier models which suggest that crystalline anisotropy plays a dominant role in establishing a dispersion of easy-axis directions about the long axis of the particles. The magnetic evolution, involving rotation and switching of individual islands, was observed at various points along the microscopic magnetization curve. A magnetization curve of an ensemble of islands was derived from the images and compares remarkably well with macroscopic M–H measurements.

Item: The Trading Function in Action (ACM (Association for Computing Machinery) Publications, 1996-09)
Jacob, Bruce; Mudge, Trevor
This paper describes a commercial software and hardware platform for telecommunications and multimedia processing. The software architecture loosely follows the CORBA and ODP standards of distributed computing and supports a number of application types on different hardware configurations.
This paper is the result of lessons learned in the process of designing, building, and modifying an industrial telecommunications platform. In particular, the use of the trading function in the design of the system led to such benefits as support for the dynamic evolution of the system, the ability to dynamically add services and data types to a running system, support for heterogeneous systems, and a simple design that performs well enough to handle traffic in excess of 40,000 busy-hour calls.

Item: An Analytical Model for Designing Memory Hierarchies (1996-10)
Jacob, Bruce; Chen, Peter M.; Silverman, Seth R.; Mudge, Trevor N.
Memory hierarchies have long been studied by many means: system building, trace-driven simulation, and mathematical analysis. Yet little help is available for the system designer wishing to quickly size the different levels in a memory hierarchy to a first-order approximation. In this paper, we present a simple analysis for providing this practical help, along with some unexpected results and intuition that come out of the analysis. By applying a specific, parameterized model of workload locality, we are able to derive a closed-form solution for the optimal size of each hierarchy level. We verify the accuracy of this solution against exhaustive simulation with two case studies: a three-level I/O storage hierarchy and a three-level processor-cache hierarchy. In all but one case, the configuration recommended by the model performs within 5% of optimal. One result of our analysis is that the first place to spend money is the cheapest (rather than the fastest) cache level, particularly with small system budgets. Another is that money spent on an n-level hierarchy is spent in a fixed proportion until another level is added.
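
The closed-form sizing result summarized above invites a back-of-the-envelope check. The Python sketch below brute-forces the two-level version of the same question under an invented power-law miss-rate model with invented costs and latencies; it is not the paper's model, only an illustration of the trade-off its closed-form solution resolves analytically.

def miss_rate(size_kb, a=0.5):
    """Toy locality model: miss rate falls off as a power of cache size."""
    return min(1.0, size_kb ** -a) if size_kb > 0 else 1.0

def avg_access_time(l1_kb, l2_kb, t1=1.0, t2=10.0, t_mem=100.0):
    """Average cycles per reference for an L1 / L2 / main-memory hierarchy."""
    m1, m2 = miss_rate(l1_kb), miss_rate(l2_kb)
    return t1 + m1 * (t2 + m2 * t_mem)

def best_split(budget, l1_cost_per_kb=1.0, l2_cost_per_kb=0.1):
    """Try every whole-dollar split of the budget between L1 and L2."""
    best = None
    for l1_dollars in range(int(budget) + 1):
        l1_kb = l1_dollars / l1_cost_per_kb
        l2_kb = (budget - l1_dollars) / l2_cost_per_kb
        t = avg_access_time(l1_kb, l2_kb)
        if best is None or t < best[0]:
            best = (t, l1_kb, l2_kb)
    return best

if __name__ == "__main__":
    t, l1, l2 = best_split(100)
    print(f"best split of a $100 budget: L1 = {l1:.0f} KB, L2 = {l2:.0f} KB, "
          f"average access = {t:.2f} cycles")
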
Operating systems such as OSF/1 and Mach charge between 0.10 and 0.28 cycles per instruction (CPI) for address translation using dedicated memory-management hardware. Software-managed translation requires 0.05 CPI. Mechanisms to support such features as shared memory, superpages, sub-page protection, and sparse address spaces can be defined completely in software, allowing much more flexibility than hardware-defined mechanisms.

Item: Algorithmic Composition as a Model of Creativity (1996-12)
Jacob, Bruce
There are two distinct types of creativity: the flash out of the blue (inspiration? genius?), and the process of incremental revisions (hard work). Not only are we years away from modeling the former, we do not even begin to understand it. The latter is algorithmic in nature and has been modeled in many systems, both musical and non-musical. Algorithmic composition is as old as music composition. It is often considered a cheat, a way out when the composer needs material and/or inspiration. It can also be thought of as a compositional tool that simply makes the composer's work go faster. This article makes a case for algorithmic composition as such a tool. The "hard work" type of creativity often involves trying many different combinations against each other and choosing one over the others. This iterative task seems natural to express as a computer algorithm. The implementation issues can be reduced to two components: how to understand one's own creative process well enough to reproduce it as an algorithm, and how to program a computer to differentiate between 'good' and 'bad' music. The philosophical issues reduce to the question: who or what is responsible for the music produced?

Item: Software-Managed Address Translation (1997-02)
Jacob, Bruce; Mudge, Trevor
In this paper we explore software-managed address translation. The purpose of the study is to specify the memory management design for a high clock-rate PowerPC implementation in which a simple design is a prerequisite for a fast clock and a short design cycle. We show that software-managed address translation is just as efficient as hardware-managed address translation, and it is much more flexible.
Operating systems such as OSF/1 and Mach charge between 0.10 and 0.28 cycles per instruction (CPI) for address translation using dedicated memory-management hardware. Software-managed translation requires 0.05 CPI. Mechanisms to support such features as shared memory, superpages, sub-page protection, and sparse address spaces can be defined completely in software, allowing much more flexibility than hardware-defined mechanisms.

Item: Analytical solution for the side-fringing fields of narrow beveled heads (American Institute of Physics, 1997-04-15)
Mayergoyz, I. D.; Madabhushi, R.; Burke, E. R.; Gomez, R. D.
By using conical coordinates, exact analytical solutions for the three-dimensional side-fringing fields of recording heads that are beveled in the down-track direction are found. These solutions are derived under the assumption of zero gap length. The side-fringing fields for the two limiting cases of infinitesimally narrow heads and semi-infinitely wide heads are presented and compared.