# Digital Signal Processing

The annual conference on Neural Information Processing Systems (NIPS) is the flagship conference on neural computation. The conference is interdisciplinary, with contributions in algorithms, learning theory, cognitive science, neuroscience, vision, speech and signal processing, reinforcement learning and control, implementations, and diverse applications. Only about 30 percent of the papers submitted are accepted for presentation at NIPS, so the quality is exceptionally high. These proceedings contain all of the papers that were presented at the 2000 conference.

The annual conference on Neural Information Processing Systems (NIPS) is the flagship conference on neural computation. It draws preeminent academic researchers from around the world and is widely considered to be a showcase conference for new developments in network algorithms and architectures. The broad range of interdisciplinary research areas represented includes neural networks and genetic algorithms, cognitive science, neuroscience and biology, computer science, AI, applied mathematics, physics, and many branches of engineering. Only about 30% of the papers submitted are accepted for presentation at NIPS, so the quality is exceptionally high. All of the papers presented appear in these proceedings.

The past decade has seen greatly increased interaction between theoretical work in neuroscience, cognitive science and information processing, and experimental work requiring sophisticated computational modeling. The 152 contributions in *NIPS 8* focus on a wide variety of algorithms and architectures for both supervised and unsupervised learning. They are divided into nine parts: Cognitive Science, Neuroscience, Theory, Algorithms and Architectures, Implementations, Speech and Signal Processing, Vision, Applications, and Control.

Chapters describe how neuroscientists and cognitive scientists use computational models of neural systems to test hypotheses and generate predictions to guide their work. This work includes models of how networks in the owl brainstem could be trained for complex localization functions, how cellular activity may underlie rat navigation, how cholinergic modulation may regulate cortical reorganization, and how damage to the parietal cortex may result in neglect.

Additional work concerns development of theoretical techniques important for understanding the dynamics of neural systems, including formation of cortical maps, analysis of recurrent networks, and analysis of self-supervised learning.

Chapters also describe how engineers and computer scientists have approached problems of pattern recognition or speech recognition using computational architectures inspired by the interaction of populations of neurons within the brain. Examples are new neural network models that have been applied to classical problems, including handwritten character recognition and object recognition, and exciting new work that focuses on building electronic hardware modeled after neural systems.

*A Bradford Book*

*Communication Complexity* describes a new intuitive model for studying circuit networks that captures the essence of circuit depth. Although the complexity of boolean functions has been studied for almost four decades, the main problems (the inability to show a separation of any two classes, or to obtain nontrivial lower bounds) remain unsolved. The communication complexity approach provides clues as to where to look for the heart of complexity and also sheds light on how to get around the difficulty of proving lower bounds.

Karchmer's approach looks at a computation device as one that separates the words of a language from the non-words. It views computation in a top-down fashion, making explicit the idea that the flow of information is a crucial notion for understanding computation. Within this new setting, *Communication Complexity* gives simpler proofs of old results and demonstrates the usefulness of the approach by presenting a depth lower bound for *st*-connectivity. Karchmer concludes by proposing open problems that point toward proving a general depth lower bound.

Mauricio Karchmer received his doctorate from Hebrew University and is currently a Postdoctoral Fellow at the University of Toronto. *Communication Complexity* received the 1988 ACM Doctoral Dissertation Award.

These twenty lectures have been developed and refined by Professor Siebert during the more than two decades he has been teaching introductory Signals and Systems courses at MIT. The lectures are designed to pursue a variety of goals in parallel: to familiarize students with the properties of a fundamental set of analytical tools; to show how these tools can be applied to help understand many important concepts and devices in modern communication and control engineering practice; to explore some of the mathematical issues behind the powers and limitations of these tools; and to begin the development of the vocabulary and grammar, common images and metaphors, of a general language of signal and system theory.

Although broadly organized as a series of lectures, many more topics and examples (as well as a large set of unusual problems and laboratory exercises) are included in the book than would be presented orally. Extensive use is made throughout of knowledge acquired in early courses in elementary electrical and electronic circuits and differential equations.

**Contents:** Review of the "classical" formulation and solution of dynamic equations for simple electrical circuits; The unilateral Laplace transform and its applications; System functions; Poles and zeros; Interconnected systems and feedback; The dynamics of feedback systems; Discrete-time signals and linear difference equations; The unilateral Z-transform and its applications; The unit-sample response and discrete-time convolution; Convolutional representations of continuous-time systems; Impulses and the superposition integral; Frequency-domain methods for general LTI systems; Fourier series; Fourier transforms and Fourier's theorem; Sampling in time and frequency; Filters, real and ideal; Duration, rise-time and bandwidth relationships: The uncertainty principle; Bandpass operations and analog communication systems; Fourier transforms in discrete-time systems; Random signals; Modern communication systems.
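As an illustration of one topic from the contents above, the unit-sample response and discrete-time convolution: the output of a linear time-invariant system is the convolution of the input with the system's unit-sample (impulse) response. A minimal sketch, not drawn from the book (the function name is illustrative):

```python
def convolve(x, h):
    """Return y[n] = sum over k of x[k] * h[n - k] for finite-length sequences."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y

# Example: a two-point moving-average filter with unit-sample response h
x = [1.0, 2.0, 3.0, 4.0]
h = [0.5, 0.5]
print(convolve(x, h))  # [0.5, 1.5, 2.5, 3.5, 2.0]
```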

*Circuits, Signals, and Systems* is included in The MIT Press Series in Electrical Engineering and Computer Science, copublished with McGraw-Hill.

The mathematical operation of quantization arises in many communication and control systems. The increasing demand on existing digital facilities, such as communication channels and data storage, can be alleviated by representing the same amount of information with fewer bits at the expense of more sophisticated data processing. In *Estimation and Control with Quantized Measurements*, Dr. Curry examines the two distinct but related problems of state variable estimation and control when the measurements are quantized. Consideration is limited to discrete-time problems, and emphasis is placed on coarsely quantized measurements and linear, possibly time-varying systems.
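To make the quantization operation concrete, here is a hedged sketch (not taken from Curry's book; the function name and step size are illustrative) of a uniform mid-rise quantizer, which maps each measurement to the midpoint of the quantization interval it falls in:

```python
import math

def quantize(z, step):
    """Map measurement z to the midpoint of its quantization interval (mid-rise)."""
    return step * (math.floor(z / step) + 0.5)

# With a coarse step, many distinct measurements collapse to the same value,
# which is why an estimator must account for the quantization explicitly.
print(quantize(0.7, 1.0))   # 0.5
print(quantize(1.2, 1.0))   # 1.5
print(quantize(-0.3, 1.0))  # -0.5
```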

In addition to examining the development of the fundamental minimum-variance (conditional-mean) estimate, which lays the groundwork for other types of estimates, the author also looks at easier-to-implement approximate nonlinear filters in conjunction with three communication systems, so the book is not limited to theory alone. The performance of optimum linear estimators is then compared with that of the nonlinear filters.

Along with a new interpretation of the problem of generating estimates from quantized measurements, both optimal and suboptimal stochastic control with quantized measurements are treated for the first time in print by Dr. Curry.

*MIT Research Monograph No. 60*

This collection of papers is the result of a desire to make available reprints of articles on digital signal processing for use in a graduate course offered at MIT. The primary objective was to present reprints in an easily accessible form. At the same time, it appeared that this collection might be useful for a wider audience, and consequently it was decided to reproduce the articles (originally published between 1965 and 1969) in book form.

The literature in this area is extensive, as evidenced by the bibliography included at the end of this collection. The articles were selected and the introduction prepared by the editor in collaboration with Bernard Gold and Charles M. Rader.

The collection of articles divides roughly into four major categories: z-transform theory and digital filter design, the effects of finite word length, the fast Fourier transform and spectral analysis, and hardware considerations in the implementation of digital filters.
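One of the categories above, the fast Fourier transform, efficiently computes the discrete Fourier transform (O(N log N) rather than O(N^2) operations). As a point of reference, a minimal sketch of the naive DFT the FFT accelerates (not taken from the collection; the function name is illustrative):

```python
import cmath

def dft(x):
    """Naive O(N^2) DFT: X[k] = sum over n of x[n] * exp(-2*pi*j*k*n/N)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N))
            for k in range(N)]

# A length-4 example: the DFT of a unit impulse is flat across frequency.
X = dft([1.0, 0.0, 0.0, 0.0])
print([round(abs(v), 6) for v in X])  # [1.0, 1.0, 1.0, 1.0]
```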