Hans-Peter Graf

Peter Graf is Professor of Psychology at the University of British Columbia.

  • Lifespan Development of Human Memory

    Hans-Peter Graf and Nobuo Ohta

    An original approach to memory development that views memory as a continuous process of growth and loss over the human lifespan rather than as a series of separate periods.

    Until recently, the vast majority of memory research used only university students and other young adults as subjects. Although such research successfully introduced new methodologies and theoretical concepts, it created a bias in our understanding of the lifespan development of memory. This book signals a departure from young-adult-centered research. It views the lifespan development of memory as a continuous process of growth and loss, where each phase of development raises unique questions favoring distinct research methods and theoretical approaches. Drawing on a broad range of investigative strategies, the book lays the foundation for a comprehensive understanding of the lifespan development of human memory.

    Topics include the childhood and adulthood development of working memory, episodic and autobiographical memory, and prospective memory, as well as the breakdown of memory functions in Alzheimer's disease. Of particular interest is the rich diversity of approaches, methods, and theories. The book takes an interdisciplinary perspective, drawing on work from psychology, psychiatry, gerontology, and biochemistry.

    • Hardcover $10.75

Contributor

  • Large-Scale Kernel Machines

    Léon Bottou, Olivier Chapelle, Dennis DeCoste, and Jason Weston

    Solutions for learning from large-scale datasets, including kernel learning algorithms that scale linearly with the volume of the data, and experiments carried out on realistically large datasets.

    Pervasive and networked computers have dramatically reduced the cost of collecting and distributing large datasets. In this context, machine learning algorithms that scale poorly could simply become irrelevant. We need learning algorithms that scale linearly with the volume of the data while maintaining enough statistical efficiency to outperform algorithms that simply process a random subset of the data. This volume offers researchers and engineers practical solutions for learning from large-scale datasets, with detailed descriptions of algorithms and experiments carried out on realistically large datasets. At the same time, it offers researchers information that can address the relative lack of theoretical grounding for many useful algorithms.

    After a detailed description of state-of-the-art support vector machine technology, an introduction to the essential concepts discussed in the volume, and a comparison of primal and dual optimization techniques, the book progresses from well-understood techniques to more novel and controversial approaches. Many contributors have made their code and data available online for further experimentation. Topics covered include fast implementations of known algorithms, approximations that are amenable to theoretical guarantees, and algorithms that perform well in practice but are difficult to analyze theoretically. A brief illustrative sketch of one such linearly scaling learner appears after this listing.

    Contributors Léon Bottou, Yoshua Bengio, Stéphane Canu, Eric Cosatto, Olivier Chapelle, Ronan Collobert, Dennis DeCoste, Ramani Duraiswami, Igor Durdanovic, Hans-Peter Graf, Arthur Gretton, Patrick Haffner, Stefanie Jegelka, Stephan Kanthak, S. Sathiya Keerthi, Yann LeCun, Chih-Jen Lin, Gaëlle Loosli, Joaquin Quiñonero-Candela, Carl Edward Rasmussen, Gunnar Rätsch, Vikas Chandrakant Raykar, Konrad Rieck, Vikas Sindhwani, Fabian Sinz, Sören Sonnenburg, Jason Weston, Christopher K. I. Williams, Elad Yom-Tov

    • Hardcover $50.00
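
    The description above stresses learners whose training cost grows linearly with the number of examples. As a rough illustration of that idea, and not code from the book, the following sketch trains a primal linear SVM with stochastic gradient descent in plain NumPy; the hinge loss, the 1/(lambda * t) step size, and all names and constants are illustrative assumptions.

    # Illustrative sketch only: a primal linear SVM trained by SGD, whose cost
    # is O(epochs * n * d), i.e. linear in the number of training examples.
    import numpy as np

    def sgd_linear_svm(X, y, lam=0.01, epochs=5, seed=0):
        # Minimize lam/2 * ||w||^2 + mean(max(0, 1 - y_i * <w, x_i>))
        # with one update per visited example.
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = np.zeros(d)
        t = 0
        for _ in range(epochs):
            for i in rng.permutation(n):
                t += 1
                eta = 1.0 / (lam * t)       # 1/(lambda * t) step size (assumed)
                margin = y[i] * (X[i] @ w)
                w *= (1.0 - eta * lam)      # shrink w for the L2 regularizer
                if margin < 1.0:            # hinge loss active: push toward y_i
                    w += eta * y[i] * X[i]
        return w

    if __name__ == "__main__":
        # Toy data: two Gaussian blobs labeled +1 and -1 (purely illustrative).
        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(+2.0, 1.0, (500, 2)),
                       rng.normal(-2.0, 1.0, (500, 2))])
        y = np.concatenate([np.ones(500), -np.ones(500)])
        w = sgd_linear_svm(X, y)
        print("training accuracy:", np.mean(np.sign(X @ w) == y))

    Run as a script, the sketch fits the classifier by touching each example once per epoch and prints the training accuracy; doubling the number of examples roughly doubles the training time, which is the linear scaling the volume is concerned with.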