Theory of Computation

We now know that there is much more to classical mechanics than previously suspected. Derivations of the equations of motion, the focus of traditional presentations of mechanics, are just the beginning. This innovative textbook, now in its second edition, concentrates on developing general methods for studying the behavior of classical systems, whether or not they have a symbolic solution. It focuses on the phenomenon of motion and makes extensive use of computer simulation in its explorations of the topic. It weaves recent discoveries in nonlinear dynamics throughout the text, rather than presenting them as an afterthought. Explorations of phenomena such as the transition to chaos, nonlinear resonances, and resonance overlap help the student develop appropriate analytic tools for understanding. The book uses computation to constrain notation, to capture and formalize methods, and for simulation and symbolic analysis. The requirement that the computer be able to interpret any expression provides the student with strict and immediate feedback about whether an expression is correctly formulated.

This second edition has been updated throughout, with revisions that reflect insights gained by the authors from using the text every year at MIT. In addition, because of substantial software improvements, this edition provides algebraic proofs of more generality than those in the previous edition; this improvement permeates the new edition.
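The flavor of that simulation-driven exploration can be suggested with a small sketch. The book itself develops its programs in Scheme, so the following Python fragment is only an illustration under assumed parameters: it integrates a periodically driven pendulum, a standard example of a system whose motion becomes chaotic, using a fourth-order Runge-Kutta step, and prints a crude stroboscopic sample of the trajectory.

```python
import math

def driven_pendulum(theta, omega, t, A=0.1, drive_freq=2.0, g_over_l=1.0):
    """Time derivatives for a periodically driven pendulum (illustrative parameters)."""
    dtheta = omega
    domega = -g_over_l * math.sin(theta) + A * math.cos(drive_freq * t)
    return dtheta, domega

def rk4_step(theta, omega, t, dt):
    """One fourth-order Runge-Kutta integration step."""
    k1 = driven_pendulum(theta, omega, t)
    k2 = driven_pendulum(theta + 0.5 * dt * k1[0], omega + 0.5 * dt * k1[1], t + 0.5 * dt)
    k3 = driven_pendulum(theta + 0.5 * dt * k2[0], omega + 0.5 * dt * k2[1], t + 0.5 * dt)
    k4 = driven_pendulum(theta + dt * k3[0], omega + dt * k3[1], t + dt)
    theta += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    omega += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return theta, omega

# Integrate and print the state roughly once per drive period (a crude stroboscopic section).
theta, omega, t, dt = 0.1, 0.0, 0.0, 0.01
for step in range(10000):
    theta, omega = rk4_step(theta, omega, t, dt)
    t += dt
    if step % 314 == 0:   # about every 2*pi/drive_freq seconds
        print(f"t={t:7.2f}  theta={theta:+.4f}  omega={omega:+.4f}")
```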

Computing is usually viewed as a technology field that advances at the breakneck speed of Moore’s Law. If we turn away even for a moment, we might miss a game-changing technological breakthrough or an earthshaking theoretical development. This book takes a different perspective, presenting computing as a science governed by fundamental principles that span all technologies. Computer science is a science of information processes. We need a new language to describe the science, and in this book Peter Denning and Craig Martell offer the great principles framework as just such a language. This is a book about the whole of computing—its algorithms, architectures, and designs.

Denning and Martell divide the great principles of computing into six categories: communication, computation, coordination, recollection, evaluation, and design. They begin with an introduction to computing, its history, its many interactions with other fields, its domains of practice, and the structure of the great principles framework. They go on to examine the great principles in different areas: information, machines, programming, computation, memory, parallelism, queueing, and design. Finally, they apply the great principles to networking, the Internet in particular.

Great Principles of Computing will be essential reading for professionals in science and engineering fields with a “computational” branch, for practitioners in computing who want overviews of less familiar areas of computer science, and for non-computer science majors who want an accessible entryway to the field.

Quantum Algorithms via Linear Algebra: A Primer

This introduction to quantum algorithms is concise but comprehensive, covering many key algorithms. It is mathematically rigorous but requires minimal background and assumes no prior knowledge of quantum theory. The book explains quantum computation in terms of elementary linear algebra; it assumes the reader will have some familiarity with vectors, matrices, and their basic properties, but offers a review of all the relevant material from linear algebra. By emphasizing computation and algorithms rather than physics, this primer makes quantum algorithms accessible to students and researchers in computer science without the complications of quantum mechanical notation, physical concepts, and philosophical issues.

After explaining the development of quantum operations and computations based on linear algebra, the book presents the major quantum algorithms, from seminal algorithms by Deutsch, Jozsa, and Simon through Shor’s and Grover’s algorithms to recent quantum walks. It covers quantum gates, computational complexity, and some graph theory. Mathematical proofs are generally short and straightforward; quantum circuits and gates are used to illuminate linear algebra; and the discussion of complexity is anchored in computational problems rather than machine models.
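To see why the linear-algebra framing is so economical, consider a minimal sketch (assuming NumPy, and not taken from the book): a qubit state is a unit vector, a gate is a unitary matrix, and applying a gate is matrix multiplication. The Hadamard gate below creates an equal superposition, and applying it twice restores the original state, the kind of interference effect that the algorithms above exploit.

```python
import numpy as np

# A qubit state is a unit vector in C^2; a gate is a unitary matrix acting on it.
ket0 = np.array([1, 0], dtype=complex)                         # |0>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)    # Hadamard gate

superposition = H @ ket0                                       # (|0> + |1>)/sqrt(2)
print("after H:", superposition)

# Measurement probabilities are the squared magnitudes of the amplitudes.
print("P(0), P(1):", np.abs(superposition) ** 2)

# Applying H twice returns |0>: interference, visible as plain matrix multiplication.
print("after H twice:", H @ H @ ket0)
```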

Quantum Algorithms via Linear Algebra is suitable for classroom use or as a reference for computer scientists and mathematicians.

Category theory was invented in the 1940s to unify and synthesize different areas in mathematics, and it has proven remarkably successful in enabling powerful communication between disparate fields and subfields within mathematics. This book shows that category theory can be useful outside of mathematics as a rigorous, flexible, and coherent modeling language throughout the sciences. Information is inherently dynamic; the same ideas can be organized and reorganized in countless ways, and the ability to translate between such organizational structures is becoming increasingly important in the sciences. Category theory offers a unifying framework for information modeling that can facilitate the translation of knowledge between disciplines.

Written in an engaging and straightforward style, and assuming little background in mathematics, the book is rigorous but accessible to non-mathematicians. Using databases as an entry to category theory, it begins with sets and functions, then introduces the reader to notions that are fundamental in mathematics: monoids, groups, orders, and graphs—categories in disguise. After explaining the “big three” concepts of category theory—categories, functors, and natural transformations—the book covers other topics, including limits, colimits, functor categories, sheaves, monads, and operads. The book explains category theory by examples and exercises rather than focusing on theorems and proofs. It includes more than 300 exercises, with solutions.
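As a concrete warm-up of the kind the book encourages, the following Python sketch (an illustration of the standard definitions, not material from the book) treats finite sets as objects and dictionaries as the functions between them, so that composition and identity behave exactly as the category axioms require.

```python
# Objects: finite sets. Morphisms: functions between them, stored here as dicts.
# Composition and identity are exactly what the category axioms ask of them.

def compose(g, f):
    """g after f: apply f, then g (defined when f's codomain matches g's domain)."""
    return {x: g[f[x]] for x in f}

def identity(obj):
    """The identity morphism on a set."""
    return {x: x for x in obj}

A = {1, 2, 3}
B = {"a", "b"}
C = {True, False}

f = {1: "a", 2: "a", 3: "b"}     # f : A -> B
g = {"a": True, "b": False}      # g : B -> C

gf = compose(g, f)               # g . f : A -> C
print(gf)                        # {1: True, 2: True, 3: False}

# Identity laws: composing with an identity changes nothing.
assert compose(f, identity(A)) == f
assert compose(identity(B), f) == f
```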

Category Theory for the Sciences is intended to create a bridge between the vast array of mathematical concepts used by mathematicians and the models and frameworks of such scientific disciplines as computation, neuroscience, and physics.

Downloadable instructor resources available for this title: 193 exercises, separate from those included in the book, with solutions

Distributed Algorithms: An Intuitive Approach

This book offers students and researchers a guide to distributed algorithms that emphasizes examples and exercises rather than the intricacies of mathematical models. It avoids mathematical argumentation, often a stumbling block for students, teaching algorithmic thought rather than proofs and logic. This approach allows the student to learn a large number of algorithms within a relatively short span of time. Algorithms are explained through brief, informal descriptions, illuminating examples, and practical exercises. The examples and exercises allow readers to understand algorithms intuitively and from different perspectives. Proof sketches, arguing the correctness of an algorithm or explaining the idea behind fundamental results, are also included. An appendix offers pseudocode descriptions of many algorithms.

Distributed algorithms are performed by a collection of computers that send messages to each other or by multiple software threads that use the same shared memory. The algorithms presented in the book are for the most part “classics,” selected because they shed light on the algorithmic design of distributed systems or on key issues in distributed computing and concurrent programming.
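One such classic is ring-based leader election in the style of Chang and Roberts, sketched below as a self-contained Python simulation. The round loop and mailbox lists stand in for real message passing and are assumptions of this sketch rather than the book's pseudocode: each node forwards any identifier larger than its own, discards smaller ones, and declares itself leader when its own identifier travels all the way around the ring.

```python
def chang_roberts(ids):
    """Return the elected leader's id for nodes arranged in a unidirectional ring."""
    n = len(ids)
    # inbox[i] holds messages waiting at node i; initially every node has sent
    # its own id to its clockwise successor.
    inbox = [[] for _ in range(n)]
    for i in range(n):
        inbox[(i + 1) % n].append(ids[i])

    leader = None
    while leader is None:
        outbox = [[] for _ in range(n)]
        for i in range(n):
            for msg in inbox[i]:
                if msg > ids[i]:
                    outbox[(i + 1) % n].append(msg)   # forward larger ids
                elif msg == ids[i]:
                    leader = ids[i]                    # own id came back: elected
                # smaller ids are simply discarded
        inbox = outbox
    return leader

print(chang_roberts([4, 17, 9, 2, 11]))   # 17: the maximum id wins
```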

Distributed Algorithms can be used in courses for upper-level undergraduates or graduate students in computer science, or as a reference for researchers in the field.

Downloadable instructor resources available for this title: solution manual and slides

Some books on algorithms are rigorous but incomplete; others cover masses of material but lack rigor. Introduction to Algorithms uniquely combines rigor and comprehensiveness. The book covers a broad range of algorithms in depth, yet makes their design and analysis accessible to all levels of readers. Each chapter is relatively self-contained and can be used as a unit of study. The algorithms are described in English and in a pseudocode designed to be readable by anyone who has done a little programming. The explanations have been kept elementary without sacrificing depth of coverage or mathematical rigor.

The first edition became a widely used text in universities worldwide as well as the standard reference for professionals. The second edition featured new chapters on the role of algorithms, probabilistic analysis and randomized algorithms, and linear programming. The third edition has been revised and updated throughout. It includes two completely new chapters, on van Emde Boas trees and multithreaded algorithms, substantial additions to the chapter on recurrences (now called “Divide-and-Conquer”), and an appendix on matrices. It features improved treatment of dynamic programming and greedy algorithms and a new notion of edge-based flow in the material on flow networks. Many new exercises and problems have been added for this edition. As of the third edition, this textbook is published exclusively by the MIT Press.
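As a taste of how that pseudocode style translates into working code, here is a Python rendering of merge sort, the canonical divide-and-conquer example; it is an illustration rather than an excerpt from the book, and the comment records the Θ(n log n) bound that follows from the recurrence T(n) = 2T(n/2) + Θ(n).

```python
def merge_sort(a):
    """Divide-and-conquer sort; T(n) = 2T(n/2) + Theta(n) gives Theta(n log n)."""
    if len(a) <= 1:                                           # base case: already sorted
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])    # divide and conquer
    # combine: merge the two sorted halves in linear time
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 4, 7, 1, 3, 2, 6]))   # [1, 2, 2, 3, 4, 5, 6, 7]
```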

Downloadable instructor resources available for this title: instructor’s manual, file of figures in the book, and pseudocode

Our growing dependence on increasingly complex computer and software systems necessitates the development of formalisms, techniques, and tools for assessing functional properties of these systems. One such technique that has emerged in the last twenty years is model checking, which systematically (and automatically) checks whether a model of a given system satisfies a desired property such as deadlock freedom, invariants, or request-response properties. This automated technique for verification and debugging has developed into a mature and widely used approach with many applications. Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field.

The book begins with the basic principles for modeling concurrent and communicating systems, introduces different classes of properties (including safety and liveness), presents the notion of fairness, and provides automata-based algorithms for these properties. It introduces the temporal logics LTL and CTL, compares them, and covers algorithms for verifying these logics, discussing real-time systems as well as systems subject to random phenomena. Separate chapters treat such efficiency-improving techniques as abstraction and symbolic manipulation. The book includes an extensive set of examples (most of which run through several chapters) and a complete set of basic results accompanied by detailed proofs. Each chapter concludes with a summary, bibliographic notes, and an extensive list of exercises, both practical and theoretical in nature.
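The core loop of explicit-state model checking can be suggested in a few lines. The Python sketch below is an illustration, not an excerpt from the book: it checks an invariant by breadth-first search over the reachable states of a tiny hand-written transition system, and the two-process toy model and mutual-exclusion property are assumptions chosen only to produce a readable counterexample.

```python
from collections import deque

def check_invariant(initial, successors, invariant):
    """Breadth-first reachability: return a counterexample path to a bad state, or None."""
    parent = {initial: None}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if not invariant(state):
            # reconstruct the path from the initial state to the violation
            path, s = [], state
            while s is not None:
                path.append(s)
                s = parent[s]
            return list(reversed(path))
        for nxt in successors(state):
            if nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return None   # the invariant holds in every reachable state

# Toy model: two processes, each cycling idle -> trying -> critical; no lock is
# modeled, so the mutual-exclusion invariant "not both critical" should fail.
STEPS = {"idle": "trying", "trying": "critical", "critical": "idle"}

def successors(state):
    p, q = state
    return [(STEPS[p], q), (p, STEPS[q])]   # interleaving: one process moves at a time

def mutex(state):
    return state != ("critical", "critical")

print(check_invariant(("idle", "idle"), successors, mutex))   # prints a counterexample path
```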

Downloadable instructor resources available for this title: solution manual

The Computational Beauty of Nature: Computer Explorations of Fractals, Chaos, Complex Systems, and Adaptation

"Simulation," writes Gary Flake in his preface, "becomes a form of experimentation in a universe of theories. The primary purpose of this book is to celebrate this fact."In this book, Gary William Flake develops in depth the simple idea that recurrent rules can produce rich and complicated behaviors. Distinguishing "agents" (e.g., molecules, cells, animals, and species) from their interactions (e.g., chemical reactions, immune system responses, sexual reproduction, and evolution), Flake argues that it is the computational properties of interactions that account for much of what we think of as "beautiful" and "interesting." From this basic thesis, Flake explores what he considers to be today's four most interesting computational topics: fractals, chaos, complex systems, and adaptation.

Each of the book's parts can be read independently, enabling even the casual reader to understand and work with the basic equations and programs. Yet the parts are bound together by the theme of the computer as a laboratory and a metaphor for understanding the universe. The inspired reader will experiment further with the ideas presented to create fractal landscapes, chaotic systems, artificial life forms, genetic algorithms, and artificial neural networks.
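A reader who wants to start experimenting immediately can do so with the simplest recurrent rule of all, the logistic map x_{n+1} = r * x_n * (1 - x_n). The short Python sketch below is in the spirit of the book's experiments rather than taken from it; the parameter values are illustrative, and they show the orbit settling to a fixed point, then to periodic cycles, and finally into chaos as r increases.

```python
# The logistic map x_{n+1} = r * x_n * (1 - x_n): one line of arithmetic,
# yet its long-run behavior changes dramatically as the parameter r varies.

def logistic_orbit(r, x0=0.2, warmup=500, samples=8):
    """Iterate past the transient, then return a few successive values of the orbit."""
    x = x0
    for _ in range(warmup):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(samples):
        x = r * x * (1 - x)
        orbit.append(round(x, 4))
    return orbit

for r in (2.8, 3.2, 3.5, 3.9):
    print(f"r = {r}: {logistic_orbit(r)}")
# r = 2.8 settles to a fixed point, 3.2 to a 2-cycle, 3.5 to a 4-cycle,
# and 3.9 wanders chaotically, never repeating.
```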