Sergei Nirenburg

Sergei Nirenburg is on the faculty of the Cognitive Science Department at Rensselaer Polytechnic Institute.

  • Linguistics for the Age of AI

    Marjorie McShane and Sergei Nirenburg

    A human-inspired, linguistically sophisticated model of language understanding for intelligent agent systems.

    One of the original goals of artificial intelligence research was to endow intelligent agents with human-level natural language capabilities. Recent AI research, however, has focused on applying statistical and machine learning approaches to big data rather than attempting to model what people do and how they do it. In this book, Marjorie McShane and Sergei Nirenburg return to the original goal of recreating human-level intelligence in a machine. They present a human-inspired, linguistically sophisticated model of language understanding for intelligent agent systems that emphasizes meaning—the deep, context-sensitive meaning that a person derives from spoken or written language.

    With Linguistics for the Age of AI, McShane and Nirenburg offer a roadmap for creating language-endowed intelligent agents (LEIAs) that can understand, explain, and learn. They describe the language-understanding capabilities of LEIAs from the perspectives of cognitive modeling and system building, emphasizing “actionability”—which involves achieving interpretations that are sufficiently deep, precise, and confident to support reasoning about action. After detailing their microtheories for topics such as semantic analysis, basic coreference, and situational reasoning, McShane and Nirenburg turn to agent applications developed using those microtheories and evaluations of a LEIA's language understanding capabilities.

    McShane and Nirenburg argue that the only way to achieve human-level language understanding by machines is to place linguistics front and center, using statistics and big data as contributing resources. They lay out a long-term research program that addresses linguistics and real-world reasoning together, within a comprehensive cognitive architecture.

    • Hardcover $75.00
  • Ontological Semantics

    Sergei Nirenburg and Victor Raskin

    A comprehensive theory-based approach to the treatment of text meaning in natural language processing applications.

    In Ontological Semantics, Sergei Nirenburg and Victor Raskin introduce a comprehensive approach to the treatment of text meaning by computer. Arguing that being able to use meaning is crucial to the success of natural language processing (NLP) applications, they depart from the ad hoc approach to meaning taken by much of the NLP community and propose theory-based semantic methods. Ontological semantics, an integrated complex of theories, methodologies, descriptions, and implementations, attempts to systematize ideas about both semantic description as representation and the manipulation of meaning by computer programs. It is built on already coordinated "microtheories" covering such diverse areas as specific language phenomena, processing heuristics, and implementation system architecture, rather than on isolated components requiring future integration. Ontological semantics is constantly evolving, driven by the need to make meaning manipulation tasks such as text analysis and text generation work. Nirenburg and Raskin have therefore developed a set of heterogeneous methods, each suited to a particular task and coordinated at the level of knowledge acquisition and runtime system architecture implementations, a methodology that also allows for a variable level of automation in all its processes.

    Nirenburg and Raskin first discuss ontological semantics in relation to other fields, including cognitive science and the AI paradigm, the philosophy of science, linguistic semantics and the philosophy of language, computational lexical semantics, and studies in formal ontology. They then describe the content of ontological semantics, discussing text-meaning representation, static knowledge sources (including the ontology, the fact repository, and the lexicon), the processes involved in text analysis, and the acquisition of static knowledge.

    • Hardcover $11.75
    • Paperback $40.00
  • Readings in Machine Translation

    Sergei Nirenburg, Harold L. Somers, and Yorick A. Wilks

    A collection of historically significant articles on machine translation, from its beginnings through the early 1990s.

    The field of machine translation (MT)—the automation of translation between human languages—has existed for more than fifty years. MT helped to usher in the field of computational linguistics and has influenced methods and applications in knowledge representation, information theory, and mathematical statistics.

    This valuable resource offers the most historically significant English-language articles on MT. The book is organized in three sections. The historical section contains articles from MT's beginnings through the late 1960s. The second section, on theoretical and methodological issues, covers sublanguage and controlled input, the role of humans in machine-aided translation, the impact of certain linguistic approaches, the transfer versus interlingua question, and the representation of meaning and knowledge. The third section, on system design, covers knowledge-based, statistical, and example-based approaches to multilevel analysis and representation, as well as computational issues.

    • Hardcover $58.00