
Theory of Computation


The minimum description length (MDL) principle is a powerful method of inductive inference, the basis of statistical modeling, pattern recognition, and machine learning. It holds that the best explanation, given a limited set of observed data, is the one that permits the greatest compression of the data. MDL methods are particularly well-suited for dealing with model selection, prediction, and estimation problems in situations where the models under consideration can be arbitrarily complex, and overfitting the data is a serious concern.
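
To make the compression idea concrete, here is a minimal two-part-code sketch (my own illustration, not an example from the book): the hypothesis about a binary string is "it contains k ones," coding the hypothesis costs about log2(n+1) bits, and coding the data given the hypothesis costs log2(C(n,k)) bits, so a regular string earns a much shorter total description than its literal n-bit encoding while an irregular one does not.

```python
from math import comb, log2

def two_part_code_length(bits: str) -> float:
    """Description length, in bits, of a binary string under the crude
    two-part code 'state the number of ones, then the string's index
    among all strings with that many ones': L(H) + L(D | H)."""
    n, k = len(bits), bits.count("1")
    return log2(n + 1) + log2(comb(n, k))

for data in ("0" * 90 + "1" * 10,              # highly regular: compresses well
             "0110101100011011010101010011"):  # irregular: no real gain
    n = len(data)
    print(f"n={n:3d}  literal={n} bits  two-part={two_part_code_length(data):.1f} bits")
```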

This extensive, step-by-step introduction to the MDL Principle provides a comprehensive reference (with an emphasis on conceptual issues) that is accessible to graduate students and researchers in statistics, pattern classification, machine learning, and data mining, to philosophers interested in the foundations of statistics, and to researchers in other applied sciences that involve model selection, including biology, econometrics, and experimental psychology. Part I provides a basic introduction to MDL and an overview of the concepts in statistics and information theory needed to understand MDL. Part II treats universal coding, the information-theoretic notion on which MDL is built, and Part III gives a formal treatment of MDL theory as a theory of inductive inference based on universal coding. Part IV provides a comprehensive overview of the statistical theory of exponential families with an emphasis on their information-theoretic properties. The text includes a number of summaries, paragraphs offering the reader a "fast track" through the material, and boxes highlighting the most important concepts.


Investigating meta-programming within the logic programming paradigm, Meta-Logics and Logic Programming presents original research on an important extension of logic programming that makes it more amenable to knowledge representation and programming in general. The 12 contributions, many written especially for this book, explore the foundations, language design issues, and applications of meta-programming in logic programming.

Meta-programming—the process of writing computer programs that can manipulate representations of other programs—has been key both in the foundations of computer science and in its practical developments. Examples of meta-programs include compilers, interpreters, program analyzers, and partial evaluators. The choice of logic programming as a basis for meta-programming offers several practical and theoretical advantages: among them, the possibility of tackling critical foundational problems of meta-programming within a strong theoretical framework, and the surprising ease of programming. The usual framework of logic programming (and more generally first-order logic), however, has to be modified and extended to formally deal with meta-programs, extensions the editors call "meta-logics." Along with an exploration of meta-programming in logic programming, the definitions, formal properties, and use of these extensions constitute one of the book's main themes.
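
As a rough flavor of what a meta-program looks like (a toy Python sketch of my own, not an example drawn from the book), the object program below is ordinary data, propositional Horn clauses, and `solve` is a vanilla meta-interpreter that works on that representation rather than executing it directly.

```python
# Object program, represented as data: (head, [body atoms]); facts have empty bodies.
program = [
    ("parent_tom_bob", []),
    ("parent_bob_ann", []),
    ("grandparent_tom_ann", ["parent_tom_bob", "parent_bob_ann"]),
]

def solve(goal, clauses):
    """Vanilla meta-interpreter: a goal holds if some clause for it has a
    provable body (depth-first; no variables, unification, or loop checks)."""
    return any(head == goal and all(solve(sub, clauses) for sub in body)
               for head, body in clauses)

print(solve("grandparent_tom_ann", program))  # True
print(solve("parent_ann_tom", program))       # False
```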

The first part of the book, Foundations, focuses on the representation problem—how object programs are represented within meta-programs. The second part, Language Support for Meta-Logics, is concerned with language extensions that make meta-programming easier and more elegant. The third part, Meta-Logics for Knowledge Management, deals with the use of meta-logic for advanced knowledge representation purposes.

Although state variable concepts are a part of modern control theory, they have not been extensively applied in communication theory. The purpose of this book is to demonstrate how the concepts and methods of state variables can be used advantageously in analyzing a variety of communication theory problems. In contrast to the impulse response and covariance function description of systems and random processes commonly used in the analysis of communication problems, Professor Baggeroer points out that a state variable approach describes these systems and processes in terms of differential equations and their excitation, which is usually a white-noise process. Theoretically, such a description provides a very general characterization with which a large class of systems, possibly time-varying and nonlinear, can be modeled. Practically, the state variable approach often provides a more representative physical description of the actual dynamics of the systems involved and, most importantly, can lead to solution techniques that are readily implemented on a computer and that yield specific numerical results.
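
As a hedged illustration of this viewpoint (my own numerical sketch, not material from the monograph), the fragment below describes a second-order system by a state equation driven by white noise and propagates its state covariance with the matrix differential equation dP/dt = FP + PF^T + GQG^T; the matrices F, G, and Q are made-up example values.

```python
import numpy as np

# Assumed example values: a lightly damped second-order system driven by
# unit-intensity white noise through its second state.
F = np.array([[0.0, 1.0],
              [-2.0, -0.8]])    # dynamics matrix
G = np.array([[0.0],
              [1.0]])           # noise input matrix
Q = np.array([[1.0]])           # white-noise intensity

# Propagate the state covariance P(t) via dP/dt = F P + P F^T + G Q G^T
# (simple Euler integration; P(0) = 0).
P = np.zeros((2, 2))
dt, T = 1e-3, 10.0
for _ in range(int(T / dt)):
    P = P + dt * (F @ P + P @ F.T + G @ Q @ G.T)

print("state covariance P(T):\n", P)
```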

The work focuses on how state variables can be used to solve several of the integral equations that recur in communication theory including, for example, the Karhunen-Loève expansion, the detection of a known signal in the presence of colored noise, and the Wiener-Hopf equation. The book is divided into two parts. The first part deals with the development from first principles of the state variable solution techniques for homogeneous and inhomogeneous Fredholm integral equations. The second part considers two specific applications of the author's integral equation theory: to optimal signal design for colored noise channels, and to linear estimation theory.
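
For orientation, here is a small discretization-based sketch of the homogeneous Fredholm problem behind the Karhunen-Loève expansion (my own illustration using a plain matrix eigendecomposition, not the author's state-variable technique); the Wiener-process kernel K(t, s) = min(t, s) is chosen because its exact eigenvalues are known and provide a check.

```python
import numpy as np

# Homogeneous Fredholm equation:  integral_0^T K(t, s) phi(s) ds = lambda * phi(t).
# Discretizing K on a midpoint grid turns it into a matrix eigenvalue problem.
T, m = 1.0, 400
t = (np.arange(m) + 0.5) * (T / m)          # midpoint quadrature grid
K = np.minimum.outer(t, t)                  # Wiener-process kernel K(t, s) = min(t, s)
lams = np.sort(np.linalg.eigvalsh(K * (T / m)))[::-1][:3]   # discretized operator
exact = np.array([(T / ((n - 0.5) * np.pi)) ** 2 for n in (1, 2, 3)])
print("numeric eigenvalues:", lams)
print("exact eigenvalues:  ", exact)
```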

The main thrust of the material presented in this book is toward finding effective numerical procedures for analyzing complex problems. Professor Baggeroer has combined several different mathematical tools not commonly used together to attack the detection and signal design problems. Numerous examples are presented throughout the book to emphasize the numerical aspects of the author's methods. If the reader is familiar with detection and estimation theory and with deterministic state variable concepts, the ideas, techniques, and results contained in this work will prove highly relevant, if not directly applicable, to a large number of communication theory problems.

MIT Research Monograph No. 61

Randomization is an important tool in the design of algorithms, and the ability of randomization to provide enhanced power is a major research topic in complexity theory. Noam Nisan continues the investigation into the power of randomization and the relationships between randomized and deterministic complexity classes by pursuing the idea of emulating randomness, or pseudorandom generation.

Pseudorandom generators reduce the number of random bits required by randomized algorithms, enable the construction of certain cryptographic protocols, and shed light on the difficulty of simulating randomized algorithms by deterministic ones. The research described here deals with two methods of constructing pseudorandom generators from hard problems and demonstrates some surprising connections between pseudorandom generators and seemingly unrelated topics such as multiparty communication complexity and random oracles.

Nisan first establishes a precise connection between computational complexity and pseudorandom number generation, revealing that efficient deterministic simulation of randomized algorithms is possible under much weaker assumptions than was previously known, and bringing to light new consequences concerning the power of random oracles. Using a remarkable argument based on multiparty communication complexity, Nisan then constructs a generator that is good against all tests computable in logarithmic space. A consequence of this result is a new construction of universal traversal sequences.
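
The recursive flavor of such a generator can be conveyed with a toy sketch (a simplified rendering of my own, not Nisan's exact construction): the seed is one block plus k hash functions drawn from a pairwise-independent family, and each level doubles the output length, so k levels stretch the seed into 2^k blocks. The modular hash family below is a convenient stand-in for the family used in the actual analysis.

```python
import random

P = (1 << 61) - 1                  # a Mersenne prime; each block is an element of Z_P

def expand(x, hashes):
    """Recursive doubling: expand one block into 2**len(hashes) blocks,
    G_i(x) = G_{i-1}(x) || G_{i-1}(h_i(x)), with h_{a,b}(x) = (a*x + b) mod P."""
    if not hashes:
        return [x]
    *rest, (a, b) = hashes
    return expand(x, rest) + expand((a * x + b) % P, rest)

rng = random.Random(0)
seed_block = rng.randrange(P)
seed_hashes = [(rng.randrange(1, P), rng.randrange(P)) for _ in range(4)]
output = expand(seed_block, seed_hashes)
print(len(output), "output blocks from a seed of", 1 + 2 * len(seed_hashes), "numbers")
```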

Contents: Introduction. Hardness vs. Randomness. Pseudorandom Generators for Logspace and Multiparty Protocols.

A System for Representing and Using Real-World Knowledge

"Consider for a moment the layers of structure and meaning that are attached to concepts like lawsuit, birthday party, fire, mother, walrus, cabbage, or king.... If I tell you that a house burned down, and that the fire started at a child's birthday party, you will think immediately of the candles on the cake and perhaps of the many paper decorations. You will not, In all probability, find yourself thinking about playing pin-the-tail-on-the-donkey or about the color of the cake's icing or about the fact that birthdays come once a year. These concepts are there when you need them, but they do not seem to slow down the search for a link between fires and birthday parties."

The human mind can do many remarkable things. One of the most remarkable is its ability to store an enormous quantity and variety of knowledge and to locate and retrieve whatever part of it is relevant in a particular context quickly and in most cases almost without effort. "If we are ever to create an artificial intelligence with human-like abilities," Fahlman writes, "we will have to endow it with a comparable knowledge-handling facility; current knowledge-base systems fall far short of this goal. This report describes an approach to the problem of representing and using real-world knowledge in a computer."

The system developed by Fahlman and presented in this book consists of two more-or-less independent parts. The first is the system's parallel network memory scheme: "Knowledge is stored as a pattern of interconnections of very simple parallel processing elements: node units that can store a dozen or so distinct marker-bits, and link units that can propagate those markers from node to node, in parallel through the network. Using these marker-bit movements, the parallel network system can perform searches and many common deductions very quickly."
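
A rough serial simulation of the idea (my own sketch, not Fahlman's hardware design) looks like this: each node carries a small set of marker bits, one sweep moves a marker across every is-a link at once, and a query such as "is Clyde a mammal?" needs only as many sweeps as the hierarchy is deep. The node names and the marker label "M1" below are hypothetical.

```python
# A toy node-and-link network: each pair is an is-a link from child to parent.
isa_links = [("clyde", "elephant"), ("elephant", "mammal"),
             ("mammal", "animal"), ("fido", "dog"), ("dog", "mammal")]

def propagate(start, marker, markers, links, sweeps=10):
    """Set `marker` on `start`, then repeatedly sweep it up the is-a links."""
    markers.setdefault(start, set()).add(marker)
    for _ in range(sweeps):
        for src, dst in links:                     # conceptually in parallel
            if marker in markers.get(src, set()):
                markers.setdefault(dst, set()).add(marker)
    return markers

markers = propagate("clyde", "M1", {}, isa_links)
print("is clyde a mammal?", "M1" in markers.get("mammal", set()))   # True
print("is clyde a dog?",    "M1" in markers.get("dog", set()))      # False
```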

The second (and more traditional) part of the knowledge-base system presented here is NETL, "a vocabulary of conventions and processing algorithms—in some sense, a language—for representing various kinds of knowledge as nodes and links in the network.... NETL incorporates a number of representational techniques—new ideas and new combinations of old ideas—which allow it to represent certain real-world concepts more precisely and more efficiently than earlier systems.... NETL has been designed to operate efficiently on the parallel network machine described above, and to exploit this machine's special abilities. Most of the ideas in NETL are applicable to knowledge-base systems on serial machines as well."

New Directions

Classical computationalism—the view that mental states are computational states—has come under attack in recent years. Critics claim that in defining computation solely in abstract, syntactic terms, computationalism neglects the real-time, embodied, real-world constraints with which cognitive systems must cope. Instead of abandoning computationalism altogether, however, some researchers are reconsidering it, recognizing that real-world computers, like minds, must deal with issues of embodiment, interaction, physical implementation, and semantics.

This book lays the foundation for a successor notion of computationalism. It covers a broad intellectual range, discussing historic developments of the notions of computation and mechanism in the computationalist model, the role of Turing machines and computational practice in artificial intelligence research, different views of computation and their role in the computational theory of mind, the nature of intentionality, and the origin of language.

Too often, designers of computer systems, both hardware and software, use models and concepts that focus on the artifact while ignoring the context in which the artifact will be used. According to this book, that neglect of context is a major reason for many of the failures in contemporary computer systems development. It is time for designers and users to join forces in the design of computer systems.

The contributors to this book address both the pragmatic approach of direct collaboration between designers and users (known as participatory design) and the more conceptual approach that incorporates complementary perspectives to help designers come up with better solutions. The volume brings together different computer-related research disciplines, including computer-supported cooperative work (CSCW), human-computer interaction (HCI), and software engineering, as well as social science disciplines concerned with the design and use of computer artifacts.

The book is organized into two parts. The first, "Artifacts and Use," focuses on the context of using computer artifacts. The second, "Process and People," focuses on the context of designing computer artifacts.

Contributors:
Colin Beardon, Jeanette Blomberg, Kristin Braa, Tone Bratteteig, Paul Dourish, Pelle Ehn, Sue Gollifer, Kaj Grønbaek, Peter Holm, Mark C. Jones, Morten Kyng, Jan Ljungberg, Tom McMaster, Theis Meggerle, Anders Mørch, Preben Mogensen, Michael J. Muller, Torbjörn Näslund, Christopher Rose, Odd Steen, Erik Stolterman, Markus Stolze, Lucy Suchman, Tamara Sumner, Micke Svedemar, Kari Thoresen, Randall Trigg, Richard Vidgen, Trevor Wood-Harper, Suzette Worden.


There is increasing interest in genetic programming by both researchers and professional software developers. These twenty-two invited contributions show how a wide variety of problems across disciplines can be solved using this new paradigm.

Advances in Genetic Programming reports significant results in improving the power of genetic programming, presenting techniques that can be employed immediately in the solution of complex problems in many areas, including machine learning and the simulation of autonomous behavior. Popular languages such as C and C++ are used in many of the applications and experiments, illustrating how genetic programming is not restricted to symbolic computing languages such as LISP. Researchers interested in getting started in genetic programming will find information on how to begin, on what public domain code is available, and on how to become part of the active genetic programming community via electronic mail.

A major focus of the book is on improving the power of genetic programming. Experimental results are presented in a variety of areas, including adding memory to genetic programming, using locality and "demes" to maintain evolutionary diversity, avoiding the traps of local optima by using coevolution, using noise to increase generality, and limiting the size of evolved solutions to improve generality.
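
For readers who have not seen genetic programming before, the bare-bones sketch below (illustrative only, not code from the book) evolves expression trees over {+, -, *} and the terminals {x, 1} by tournament selection, subtree crossover, and subtree mutation to fit the target x*x + x + 1; real systems add the refinements described above, such as demes, coevolution, and size limits. All names and parameter values here are my own choices.

```python
import random
rng = random.Random(1)

OPS = ["add", "sub", "mul"]

def rand_tree(depth=3):
    """Random expression tree: a terminal or [op, left, right]."""
    if depth == 0 or rng.random() < 0.3:
        return rng.choice(["x", 1])
    return [rng.choice(OPS), rand_tree(depth - 1), rand_tree(depth - 1)]

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, int):
        return tree
    op, a, b = tree
    a, b = evaluate(a, x), evaluate(b, x)
    return a + b if op == "add" else a - b if op == "sub" else a * b

def fitness(tree):
    """Sum of absolute errors against the target x*x + x + 1 (lower is better)."""
    return sum(abs(evaluate(tree, x) - (x * x + x + 1)) for x in range(-5, 6))

def random_path(tree, path=()):
    """Path (sequence of child indices) to a randomly chosen subtree."""
    if not isinstance(tree, list) or rng.random() < 0.3:
        return path
    i = rng.choice([1, 2])
    return random_path(tree[i], path + (i,))

def subtree_at(tree, path):
    for i in path:
        tree = tree[i]
    return tree

def replace_at(tree, path, new):
    if not path:
        return new
    copy = list(tree)
    copy[path[0]] = replace_at(copy[path[0]], path[1:], new)
    return copy

pop = [rand_tree() for _ in range(100)]
for _ in range(20):
    def pick():                                    # tournament selection
        return min(rng.sample(pop, 5), key=fitness)
    nxt = []
    for _ in range(len(pop)):
        a, b = pick(), pick()
        child = replace_at(a, random_path(a), subtree_at(b, random_path(b)))
        if rng.random() < 0.1:                     # subtree mutation
            child = replace_at(child, random_path(child), rand_tree(2))
        nxt.append(child)
    pop = nxt

best = min(pop, key=fitness)
print("best fitness:", fitness(best))
```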

Significant theoretical results in the understanding of the processes underlying genetic programming are presented, as are several results in the area of automatic function definition. Performance increases are demonstrated by directly evolving machine code, and implementation and design issues for genetic programming in C++ are discussed.


Automated reasoning has matured into one of the most advanced areas of computer science. It is used in many areas of the field, including software and hardware verification, logic and functional programming, formal methods, knowledge representation, deductive databases, and artificial intelligence. This handbook presents an overview of the fundamental ideas, techniques, and methods in automated reasoning and its applications. The material covers both theory and implementation. In addition to traditional topics, the book covers material that bridges the gap between automated reasoning and related areas. Examples include model checking, nonmonotonic reasoning, numerical constraints, description logics, and implementation of declarative programming languages.

The book consists of eight parts. After an overview of the early history of automated deduction, the areas covered are reasoning methods in first-order logic; equality and other built-in theories; methods of automated reasoning using induction; higher-order logic, which is used in a number of automatic and interactive proof-development systems; automated reasoning in nonclassical logics; decidable classes and model building; and implementation-related questions.
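
As a taste of the machinery involved (a minimal sketch of my own, far simpler than the first-order calculi the handbook treats), the propositional resolution prover below represents a clause as a set of literals, where an atom such as p is negated as -p, and saturates the clause set, reporting unsatisfiability exactly when the empty clause is derived.

```python
from itertools import combinations

def negate(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def resolvents(c1, c2):
    """All resolvents of two clauses (resolve on each complementary pair)."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return out

def unsatisfiable(clauses):
    """Saturate the clause set; True iff the empty clause is derived."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolvents(c1, c2):
                if not r:
                    return True          # empty clause: contradiction found
                new.add(r)
        if new <= clauses:
            return False                 # saturated without the empty clause
        clauses |= new

# (p implies q), p, and not-q together are unsatisfiable; dropping not-q is not.
cnf = [frozenset({"-p", "q"}), frozenset({"p"}), frozenset({"-q"})]
print(unsatisfiable(cnf))       # True
print(unsatisfiable(cnf[:2]))   # False
```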
