Modern, complex digital systems invariably include hardware-implemented finite state machines. The correct design of such parts is crucial for attaining proper system performance. This book offers detailed, comprehensive coverage of the theory and design for any category of hardware-implemented finite state machines. It describes crucial design problems that lead to incorrect or far-from-optimal implementations and provides examples of finite state machines developed in both VHDL and SystemVerilog (the successor of Verilog) hardware description languages.
Important features include: extensive review of design practices for sequential digital circuits; a new division of all state machines into three hardware-based categories, encompassing all possible situations, with numerous practical examples provided in all three categories; the presentation of complete designs, with detailed VHDL and SystemVerilog codes, comments, and simulation results, all tested in FPGA devices; and exercise examples, all of which can be synthesized, simulated, and physically implemented in FPGA boards. Additional material is available on the book’s Website.
Designing a state machine in hardware is more complex than designing it in software. Although interest in hardware for finite state machines has grown dramatically in recent years, there is no comprehensive treatment of the subject. This book offers the most detailed coverage of finite state machines available. It will be essential for industrial designers of digital systems and for students of electrical engineering and computer science.
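The hardware pattern the book builds on can be illustrated outside an HDL. Below is a minimal Python sketch (not from the book, which works in VHDL and SystemVerilog) of a Moore-type finite state machine: a state register plus next-state and output logic, with invented states and inputs.

```python
# Illustrative Moore FSM: the output depends only on the current state,
# mirroring the state-register/next-state-logic split of a hardware FSM.

TRANSITIONS = {            # (state, input) -> next state
    ("idle", "start"): "run",
    ("run", "stop"): "idle",
    ("run", "pause"): "hold",
    ("hold", "start"): "run",
}
OUTPUTS = {"idle": 0, "run": 1, "hold": 2}   # Moore: output is a function of state

def step(state, symbol):
    """Advance one clock tick; unrecognized inputs leave the state unchanged."""
    return TRANSITIONS.get((state, symbol), state)

def run(inputs, state="idle"):
    """Apply an input sequence and record the output after each tick."""
    trace = [OUTPUTS[state]]
    for sym in inputs:
        state = step(state, sym)
        trace.append(OUTPUTS[state])
    return state, trace
```

In an HDL the same structure appears as a clocked process for the state register and combinational processes for the transition and output functions.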
This text offers a comprehensive treatment of VHDL and its applications to the design and simulation of real, industry-standard circuits. It focuses on the use of VHDL rather than solely on the language, showing why and how certain types of circuits are inferred from the language constructs and how any of the four simulation categories can be implemented. It makes a rigorous distinction between VHDL for synthesis and VHDL for simulation. The VHDL codes in all design examples are complete, and circuit diagrams, physical synthesis in FPGAs, simulation results, and explanatory comments are included with the designs. The text reviews fundamental concepts of digital electronics and design and includes a series of appendixes that offer tutorials on important design tools including ISE, Quartus II, and ModelSim, as well as descriptions of programmable logic devices in which the designs are implemented, the DE2 development board, standard VHDL packages, and other features. All four VHDL editions (1987, 1993, 2002, and 2008) are covered.

This expanded second edition is the first textbook on VHDL to include a detailed analysis of circuit simulation with VHDL testbenches in all four categories (nonautomated, fully automated, functional, and timing simulations), accompanied by complete practical examples. Chapters 1–9 have been updated, with new design examples and new details on such topics as data types and code statements. Chapter 10 is entirely new and deals exclusively with simulation. Chapters 11–17 are also entirely new, presenting extended and advanced designs with theoretical and practical coverage of serial data communications circuits, video circuits, and other topics. There are many more illustrations, and the exercises have been updated and their number more than doubled.
Downloadable instructor resources available for this title: solutions and slides
Signal processing and neural computation have separately and significantly influenced many disciplines, but the cross-fertilization of the two fields has begun only recently. Research now shows that each has much to teach the other, as we see highly sophisticated kinds of signal processing and elaborate hierarchical levels of neural computation performed side by side in the brain. In New Directions in Statistical Signal Processing, leading researchers from both signal processing and neural computation present new work that aims to promote interaction between the two disciplines. The book's 14 chapters, almost evenly divided between signal processing and neural computation, begin with the brain and move on to communication, signal processing, and learning systems. They examine such topics as how computational models help us understand the brain's information processing, how an intelligent machine could solve the "cocktail party problem" with "active audition" in a noisy environment, graphical and network structure modeling approaches, uncertainty in network communications, the geometric approach to blind signal processing, game-theoretic learning algorithms, and observable operator models (OOMs) as an alternative to hidden Markov models (HMMs).
The annual Neural Information Processing Systems (NIPS) conference is the flagship meeting on neural computation. It draws a diverse group of attendees—physicists, neuroscientists, mathematicians, statisticians, and computer scientists. The presentations are interdisciplinary, with contributions in algorithms, learning theory, cognitive science, neuroscience, brain imaging, vision, speech and signal processing, reinforcement learning and control, emerging technologies, and applications. Only twenty-five percent of the papers submitted are accepted for presentation at NIPS, so the quality is exceptionally high. This volume contains the papers presented at the December 2005 meeting, held in Vancouver.
This book addresses a fundamental software engineering issue, applying formal techniques and rigorous analysis to a practical problem of great current interest: the incorporation of language-specific knowledge in interactive programming environments. It makes a basic contribution in this area by proposing an attribute-grammar framework for incremental semantic analysis and establishing its algorithmic foundations. The results are theoretically important while having immediate practical utility for implementing environment-generating systems.
The book's principal technical results include: an optimal-time algorithm for incrementally maintaining a consistent attributed tree, together with a treatment of attribute-grammar subclasses that allows an optimizing environment generator to select the most efficient applicable algorithm; a general method for sharing storage among attributes whose values are complex data structures; and two algorithms that carry out attribute evaluation while reducing the number of intermediate attribute values retained. While others have worked on this last problem, Reps's algorithms are the first to achieve sublinear worst-case behavior. One algorithm is optimal, achieving the log n lower bound on space in nonlinear time, while the second uses as much as √n space but runs in linear time.
An exploration of the techniques for analyzing the behavior of one- and two-dimensional iterative networks formed of discrete, or logical, elements, showing that most questions about the behavior of such iterative systems are recursively undecidable.
Although state variable concepts are a part of modern control theory, they have not been extensively applied in communication theory. The purpose of this book is to demonstrate how the concepts and methods of state variables can be used advantageously in analyzing a variety of communication theory problems. In contrast to the impulse response and covariance function description of systems and random processes commonly used in the analysis of communication problems, Professor Baggeroer points out that a state variable approach describes these systems and processes in terms of differential equations and their excitation, which is usually a white-noise process. Theoretically, such a description provides a very general characterization on which a large class of systems, possibly time varying and nonlinear, can be modeled. Practically, the state variable approach often provides a more representative physical description of the actual dynamics of the systems involved and, most importantly, can lead to solution techniques that are readily implemented on a computer and that yield specific numerical results.
The work focuses on how state variables can be used to solve several of the integral equations that recur in communication theory, including, for example, the Karhunen-Loève expansion, the detection of a known signal in the presence of colored noise, and the Wiener-Hopf equation. The book is divided into two parts. The first part deals with the development from first principles of the state variable solution techniques for homogeneous and inhomogeneous Fredholm integral equations. The second part considers two specific applications of the author's integral equation theory: to optimal signal design for colored noise channels, and to linear estimation theory.
The main thrust of the material presented in this book is toward finding effective numerical procedures for analyzing complex problems. Professor Baggeroer has combined several different mathematical tools not commonly used together to attack the detection and signal design problems. Numerous examples are presented throughout the book to emphasize the numerical aspects of the author's methods. If the reader is familiar with detection and estimation theory and with deterministic state variable concepts, the ideas, techniques, and results contained in this work will prove highly relevant, if not directly applicable, to a large number of communication theory problems.
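The contrast the blurb draws, covariance-function description versus state-variable description, can be shown on the simplest possible case. Below is an illustrative sketch (parameters invented, not from the book) for a first-order discrete-time system x[k+1] = phi·x[k] + w[k] driven by white noise of variance q: the steady-state variance can be obtained either by summing the squared impulse response or by solving the Lyapunov equation P = phi²·P + q that the state-variable view yields directly.

```python
# Two routes to the steady-state variance of a first-order system
# driven by white noise of variance q.

phi, q = 0.8, 1.0   # assumed system pole and noise variance

# Covariance / impulse-response view: var = q * sum over k of phi^(2k)
var_impulse = q * sum(phi ** (2 * k) for k in range(200))

# State-variable view: solve the Lyapunov equation P = phi^2 * P + q
var_state = q / (1.0 - phi ** 2)
```

For this scalar case both routes give 1/0.36 ≈ 2.778; the advantage of the state-variable route is that the same Lyapunov machinery extends to vector, time-varying systems where summing an impulse response numerically becomes awkward.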
In recent years, many approximate methods have been developed for analyzing queueing models of complex computer systems. These ad hoc methods usually focus on specific aspects of system operation, and appear to be different from one another, making it difficult to see the underlying principles of model development, to understand the relationship between different models of the same system, or to apply the existing methods to new situations. This book presents the first systematic study of approximation methods in queueing network modeling and of the way these methods are developed.
Metamodeling identifies the underlying modeling process and provides tools and techniques for model development that will allow students and researchers to sort through the many different methods, understand them, and apply them to new problems. Using the metamodeling characterization, the book surveys and classifies a large number of approximation methods, catalogs a large number of useful model transformations, characterizes iterative solution procedures and gives theorems for proofs of convergence. This work has led to several other significant results, most notably: an approximation that works well for systems containing semaphores that serialize processes; and the discovery of multiple stable operating points for systems in which there are processes at several priority levels.
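As background for the approximation methods the book classifies, the exact mean-value-analysis (MVA) recursion for a closed, single-class queueing network is sketched below. This is the standard textbook algorithm, not the book's own method, and the service demands in the test are invented; approximate techniques of the kind the book studies typically replace this exact recursion with fixed-point iterations over the same equations.

```python
def mva(demands, N):
    """Exact MVA for a closed single-class network.
    demands[i] = mean service demand at queueing center i; N = customers.
    Returns (throughput, per-center mean queue lengths)."""
    q = [0.0] * len(demands)   # mean queue lengths with 0 customers
    X = 0.0
    for n in range(1, N + 1):
        # residence time at each center, given queue lengths at population n-1
        r = [d * (1.0 + q[i]) for i, d in enumerate(demands)]
        X = n / sum(r)                  # throughput, by Little's law on the network
        q = [X * ri for ri in r]        # queue lengths, by Little's law per center
    return X, q
```

For a balanced two-center network with unit demands, the recursion reproduces the known throughput X(N) = N/(N+1).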
Contents: Queueing Network Models of Computer Systems; The Structure of the Modeling Process; Behavior Sequence Transformations and Models with Shadow Servers; State Space Transformations; Consistent Solution, Iteration, and Convergence.
This book inaugurates the MIT Press series in Computer Systems (Research Reports and Notes), edited by Herb Schwetman.
Speed-independent circuits offer a potential solution to the timing problems of VLSI. In this book David Dill develops and implements a theory for practical automatic verification of these control circuits. He describes a formal model of circuit operation, defines the proper relationship between an implementation and its specification, and constructs a computer program that can check this relationship.
Asynchronous or speed-independent circuit design has gained renewed interest in the VLSI community because of the possibilities it provides for dealing with problems that arise with the increasing complexity of VLSI circuits. Speed-independent circuits offer a way around such phenomena as clock skew, which can be a serious obstacle in the design of large systems. They can expedite circuit design by reducing design time and simplifying the overall process.
A major challenge to the successful utilization of speed-independent circuits is correctness. The verification method described here ensures that a design is correct, and because it can be automated, it offers a significant advantage over manual verification. Dill proposes two distinct theories: prefix-closed trace structures, which can model and specify safety properties, and complete trace structures, which can also deal with liveness and fairness properties.
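The safety half of this scheme can be illustrated with a toy model (this is a simplification, not Dill's actual formalism): represent behaviors as prefix-closed sets of event strings, and check that an implementation exhibits no trace the specification forbids.

```python
# Toy safety conformance over prefix-closed trace sets.
# Events are single characters; a trace is a string of events.

def prefix_closure(traces):
    """Close a set of traces under prefixes (including the empty trace)."""
    closed = set()
    for t in traces:
        for i in range(len(t) + 1):
            closed.add(t[:i])
    return closed

def conforms(impl_traces, spec_traces):
    """Safety check: every implementation trace must be a specification trace."""
    return prefix_closure(impl_traces) <= prefix_closure(spec_traces)
```

With events r (request) and a (acknowledge), an implementation that produces "ra" conforms to a specification allowing "ra", while one that acknowledges before any request ("ar") fails the check.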
David L. Dill received his doctorate from Carnegie Mellon University and is Assistant Professor in the Computer Science Department at Stanford University. Trace Theory for Automatic Hierarchical Verification of Speed-Independent Circuits is a 1988 ACM Distinguished Dissertation.
Neuromorphic engineers work to improve the performance of artificial systems through the development of chips and systems that process information collectively using primarily analog circuits. This book presents the central concepts required for the creative and successful design of analog VLSI circuits. The discussion is weighted toward novel circuits that emulate natural signal processing. Unlike most circuits in commercial or industrial applications, these circuits operate mainly in the subthreshold or weak inversion region. Moreover, their functionality is not limited to linear operations, but also encompasses many interesting nonlinear operations similar to those occurring in natural systems. Topics include device physics, linear and nonlinear circuit forms, translinear circuits, photodetectors, floating-gate devices, noise analysis, and process technology.
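The weak-inversion behavior mentioned above has a simple first-order form: saturation current roughly exponential in gate voltage, I = I0·exp(κ·Vg/UT), which is what makes translinear (log-domain) circuits possible. The sketch below uses invented device parameters (I0 and κ are device-dependent) purely to illustrate the exponential law.

```python
import math

I0 = 1e-15        # pre-exponential current scale, amperes (assumed)
KAPPA = 0.7       # gate-to-surface coupling coefficient (assumed)
UT = 0.0258       # thermal voltage at ~300 K, volts

def subthreshold_current(vg):
    """First-order subthreshold saturation current for gate voltage vg."""
    return I0 * math.exp(KAPPA * vg / UT)

# Raising the gate voltage by UT/kappa (about 37 mV with these numbers)
# multiplies the current by a factor of e:
ratio = subthreshold_current(0.3 + UT / KAPPA) / subthreshold_current(0.3)
```

This exponential current-voltage relation is the same mathematical form as a bipolar transistor's, which is why translinear circuit techniques carry over to subthreshold MOS design.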