This major collection of short essays reviews the scope and progress of research in artificial intelligence over the past two decades. Seminal and most-cited papers from the journal Artificial Intelligence are revisited by their authors, who describe how their research has developed, both in their own work and in that of others, since the journal's first publication.
Constraint-based reasoning is an important area of automated reasoning in artificial intelligence, with many applications. These include configuration and design problems, planning and scheduling, temporal and spatial reasoning, defeasible and causal reasoning, machine vision and language understanding, qualitative and diagnostic reasoning, and expert systems. Constraint-Based Reasoning presents current work in the field at several levels: theory, algorithms, languages, applications, and hardware.
Growing interest in symbolic representation and reasoning has pushed this backstage activity into the spotlight as a clearly identifiable and technically rich subfield in artificial intelligence. This collection of extended versions of 12 papers from the First International Conference on Principles of Knowledge Representation and Reasoning provides a snapshot of the best current work in AI on formal methods and principles of representation and reasoning. The topics range from temporal reasoning to default reasoning to representations for natural language.
Have the classical methods and ideas of AI outlived their usefulness? Foundations of Artificial Intelligence critically evaluates the fundamental assumptions underpinning the dominant approaches to AI. In the 11 contributions, theorists historically associated with each position identify the basic tenets of their position. They discuss the underlying principles, describe the natural types of problems and tasks in which their approach succeeds, explain where its power comes from, and what its scope and limits are.
The six contributions in Connectionist Symbol Processing address the current tension within the artificial intelligence community between advocates of powerful symbolic representations that lack efficient learning procedures and advocates of relatively simple learning procedures that lack the ability to represent complex structures effectively. The authors seek to extend the representational power of connectionist networks without abandoning the automatic learning that makes these networks interesting.
New perspectives and techniques are shaping the field of computer-aided instruction. These essays explore cognitively oriented empirical trials that use AI programming as a modeling methodology and that can provide valuable insight into a variety of learning problems. Drawing on work in cognitive theory, plan-based program recognition, qualitative reasoning, and cognitive models of learning and teaching, this exciting research covers a wide range of alternatives to tutoring dialogues.

William J. Clancey is Senior Research Scientist at the Institute for Research on Learning, Palo Alto.
Having played a central role at the inception of artificial intelligence research, machine learning has recently reemerged as a major area of study at the very core of the subject. Solid theoretical foundations are being constructed, machine learning methods are being integrated with powerful performance systems, and practical applications based on established techniques are emerging.