The consideration of time or dynamics is fundamental to all aspects of mental activity--perception, cognition, and emotion--because the main feature of brain activity is the continuous change of the underlying brain states even in a constant environment. The application of nonlinear dynamics to the study of brain activity began to flourish in the 1990s when combined with empirical findings from modern morphological and physiological studies. This book offers perspectives on brain dynamics that draw on the latest advances in research in the field.
In Biological Learning and Control, Reza Shadmehr and Sandro Mussa-Ivaldi present a theoretical framework for understanding the regularity of the brain’s perceptions, its reactions to sensory stimuli, and its control of movements. They offer an account of perception as the combination of prediction and observation: the brain builds internal models that describe what should happen and then combines this prediction with reports from the sensory system to form a belief.
Over the past sixty years, powerful methods of model-based control engineering have been responsible for such dramatic advances in engineering systems as autolanding aircraft, autonomous vehicles, and even weather forecasting. Over those same decades, our models of the nervous system have evolved from single-cell membranes to neuronal networks to large-scale models of the human brain. Yet until recently control theory was completely inapplicable to the types of nonlinear models being developed in neuroscience.
Vision is a massively parallel computational process, in which the retinal image is transformed over a sequence of stages so as to emphasize behaviorally relevant information (such as object category and identity) and deemphasize other information (such as viewpoint and lighting). The processes behind vision operate by concurrent computation and message passing among neurons within a visual area and between different areas.
This book offers an introduction to current methods in computational modeling in neuroscience. The book describes realistic modeling methods at levels of complexity ranging from molecular interactions to large neural networks. A "how to" book rather than an analytical account, it focuses on the presentation of methodological approaches, including the selection of the appropriate method and its potential pitfalls.
A Bayesian approach can contribute to an understanding of the brain on multiple levels: by giving normative predictions about how an ideal sensory system should combine prior knowledge and observation, by providing a mechanistic interpretation of the dynamic functioning of brain circuits, and by suggesting optimal ways of deciphering experimental data.
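The normative prediction mentioned above can be made concrete with a standard textbook calculation: for Gaussian prior and Gaussian observation noise, the ideal observer's posterior is a precision-weighted average of the two. The sketch below is illustrative only; the function name and all numbers are assumptions, not taken from the book.

```python
# Minimal sketch of ideal-observer cue combination (Gaussian prior,
# Gaussian observation noise). Names and values are illustrative.

def posterior(prior_mean, prior_var, obs, obs_var):
    """Precision-weighted combination of prior belief and observation."""
    w = prior_var / (prior_var + obs_var)       # weight on the observation
    mean = prior_mean + w * (obs - prior_mean)  # posterior mean
    var = prior_var * obs_var / (prior_var + obs_var)  # posterior variance
    return mean, var

# A vague prior (variance 4) and a reliable observation (variance 1):
mean, var = posterior(prior_mean=0.0, prior_var=4.0, obs=2.0, obs_var=1.0)
# the posterior mean (1.6) sits between prior and observation, pulled
# toward the more reliable source, and the posterior variance (0.8)
# is smaller than either input variance
```

Note that the same weighted-average form underlies the prediction-plus-observation account of perception described above for Shadmehr and Mussa-Ivaldi's book.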
In order to model neuronal behavior or to interpret the results of modeling studies, neuroscientists must call upon methods of nonlinear dynamics. This book offers an introduction to nonlinear dynamical systems theory for researchers and graduate students in neuroscience. It also provides an overview of neuroscience for mathematicians who want to learn the basic facts of electrophysiology.
Theoretical neuroscience provides a quantitative basis for describing what nervous systems do, determining how they function, and uncovering the general principles by which they operate. This text introduces the basic mathematical and computational methods of theoretical neuroscience and presents applications in a variety of areas including vision, sensory-motor integration, development, learning, and memory.
Neuroscience involves the study of the nervous system, and its topics range from genetics to inferential reasoning. At its heart, however, lies a search for understanding how the environment affects the nervous system and how the nervous system, in turn, empowers us to interact with and alter our environment. This empowerment requires motor learning. The Computational Neurobiology of Reaching and Pointing addresses the neural mechanisms of one important form of motor learning.
For years, researchers have used the theoretical tools of engineering to understand neural systems, but much of this work has been conducted in relative isolation. In Neural Engineering, Chris Eliasmith and Charles Anderson provide a synthesis of the disparate approaches current in computational neuroscience, incorporating ideas from neural coding, neural computation, physiology, communications theory, control theory, dynamics, and probability theory. This synthesis, they argue, enables novel theoretical and practical insights into the functioning of neural systems.
This timely overview and synthesis of recent work in both artificial neural networks and neurobiology seeks to examine neurobiological data from a network perspective and to encourage neuroscientists to participate in constructing the next generation of neural networks. Individual chapters were commissioned from selected authors to bridge the gap between present neural network models and the needs of neurophysiologists who are trying to use these models as part of their research on how the brain works.
Wilfrid Rall was a pioneer in establishing the integrative functions of neuronal dendrites that have provided a foundation for neurobiology in general and computational neuroscience in particular. This collection of fifteen previously published papers, some of them not widely available, has been carefully chosen and annotated by Rall's colleagues and other leading neuroscientists.
Large-Scale Neuronal Theories of the Brain brings together thirteen original contributions by some of the top scientists working in neuroscience today. It presents models and theories that will most likely shape and influence the way we think about the brain, the mind, and interactions between the two in the years to come. Chapters consider global theories of the brain from the bottom up—providing theories that are based on real nerve cells, their firing properties, and their anatomical connections.
This book provides an overview of self-organizing map formation, including recent developments. Self-organizing maps form a branch of unsupervised learning, which is the study of what can be determined about the statistical properties of input data without explicit feedback from a teacher. The articles are drawn from the journal Neural Computation.
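The idea of learning structure without a teacher can be illustrated with a toy self-organizing map: a chain of units repeatedly moves its best-matching unit (and that unit's neighbors) toward each input sample. The sketch below is a minimal illustration under assumed parameters, not an algorithm from the book.

```python
# Toy 1-D self-organizing map: a chain of units learns to cover a
# uniform input distribution with no explicit feedback from a teacher.
# All parameters (10 units, learning rate 0.1, radius-1 neighborhood)
# are illustrative assumptions.
import random

random.seed(0)
n_units = 10
weights = [random.random() for _ in range(n_units)]  # one weight per unit

for step in range(2000):
    x = random.random()  # draw an input sample from [0, 1)
    # best-matching unit: the unit whose weight is closest to the input
    bmu = min(range(n_units), key=lambda i: abs(weights[i] - x))
    lr = 0.1
    for i in range(n_units):
        # update the BMU and its immediate chain neighbors toward x
        if abs(i - bmu) <= 1:
            weights[i] += lr * (x - weights[i])

# after training, the weights spread out to cover the input range,
# reflecting the statistical structure of the data
```

The neighborhood update is what distinguishes this from plain competitive learning: because adjacent units move together, nearby units end up representing nearby inputs, which is the map property the articles in this volume analyze.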
Graphical models use graphs to represent and manipulate joint probability distributions. They have their roots in artificial intelligence, statistics, and neural networks. The clean mathematical formalism of the graphical models framework makes it possible to understand a wide variety of network-based approaches to computation, and in particular to understand many neural network algorithms and architectures as instances of a broader probabilistic methodology.
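The core trick such models exploit can be shown in a few lines: a graph encodes conditional-independence assumptions, so the joint distribution factors into small local conditionals. The example below (a chain over three binary variables, with made-up probabilities) is a generic illustration of that factorization, not an example from any particular book.

```python
# Illustrative chain-structured graphical model A -> B -> C over three
# binary variables, so P(a, b, c) = P(a) * P(b|a) * P(c|b).
# All probability values are arbitrary assumptions for illustration.
from itertools import product

P_a = {0: 0.6, 1: 0.4}
P_b_given_a = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}
P_c_given_b = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.5, 1: 0.5}}

def joint(a, b, c):
    """Joint probability via the chain factorization."""
    return P_a[a] * P_b_given_a[a][b] * P_c_given_b[b][c]

# The factorization needs only 1 + 2 + 2 = 5 free parameters, versus
# the 7 required for an unstructured joint over three binary variables.
total = sum(joint(a, b, c) for a, b, c in product([0, 1], repeat=2 + 1))
# total sums to 1, confirming the factors define a valid distribution
```

The parameter saving grows exponentially with the number of variables, which is why the factorized view unifies so many network-based algorithms.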
This text provides an introduction to computational aspects of early vision, in particular, color, stereo, and visual navigation. It integrates approaches from psychophysics and quantitative neurobiology, as well as theories and algorithms from machine vision and photogrammetry. When presenting mathematical material, it uses detailed verbal descriptions and illustrations to clarify complex points. The text is suitable for upper-level students in neuroscience, biology, and psychology who have basic mathematical skills and are interested in studying the mathematical modeling of perception.
Since its founding in 1989 by Terrence Sejnowski, Neural Computation has become the leading journal in the field. Foundations of Neural Computation collects, by topic, the most significant papers that have appeared in the journal over the past nine years.
What does it mean to say that a certain set of spikes is the right answer to a computational problem? In what sense does a spike train convey information about the sensory world? Spikes begins by providing precise formulations of these and related questions about the representation of sensory signals in neural spike trains. The answers to these questions are then pursued in experiments on sensory neurons.
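One standard way to make "conveys information" precise, in the spirit of the questions above, is Shannon mutual information between stimulus and response. The sketch below uses an assumed toy joint distribution over a binary stimulus and a binary spike response; it illustrates the general measure, not the book's specific experiments.

```python
# Mutual information (in bits) between a binary stimulus and a binary
# spike response. The joint distribution is an illustrative assumption:
# a spike is more likely when the stimulus is present.
import math

# keys are (stimulus, spike) pairs
p = {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.10, (1, 1): 0.40}

def mutual_information(p):
    """I(S; R) = sum_{s,r} p(s,r) * log2( p(s,r) / (p(s) p(r)) )."""
    ps = {s: sum(v for (si, _), v in p.items() if si == s) for s in (0, 1)}
    pr = {r: sum(v for (_, ri), v in p.items() if ri == r) for r in (0, 1)}
    return sum(v * math.log2(v / (ps[s] * pr[r]))
               for (s, r), v in p.items() if v > 0)

mi = mutual_information(p)  # roughly 0.4 bits per response here
```

A response that were statistically independent of the stimulus would yield zero bits; the more reliably spiking tracks the stimulus, the closer the measure gets to one bit for binary variables.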
Over the past few years, computer modeling has become more prevalent in the clinical sciences as an alternative to traditional symbol-processing models. This book provides an introduction to the neural network modeling of complex cognitive and neuropsychological processes. It is intended to make the neural network approach accessible to practicing neuropsychologists, psychologists, neurologists, and psychiatrists. It will also be a useful resource for computer scientists, mathematicians, and interdisciplinary cognitive neuroscientists.
Much research focuses on the question of how information is processed in nervous systems, from the level of individual ionic channels to large-scale neuronal networks, and from "simple" animals such as sea slugs and flies to cats and primates. New interdisciplinary methodologies combine a bottom-up experimental methodology with the more top-down-driven computational and modeling approach. This book serves as a handbook of computational methods and techniques for modeling the functional properties of single nerve cells and groups of nerve cells.
Recent years have seen a remarkable expansion of knowledge about the anatomical organization of the part of the brain known as the basal ganglia, the signal processing that occurs in these structures, and the many relations both to molecular mechanisms and to cognitive functions. This book brings together the biology and computational features of the basal ganglia and their related cortical areas along with select examples of how this knowledge can be integrated into neural network models.
This introduction to the crustacean stomatogastric nervous system (STNS) describes some of the best-understood neural networks in the animal kingdom at cellular, network, behavioral, comparative, and evolutionary levels of analysis.