The studies in Weaving a Lexicon make a significant contribution to the growing field of lexical acquisition by considering the multidimensional way in which infants and children acquire the lexicon of their native language. They examine the many strands of knowledge and skill—including perceptual sensitivities, conceptual and semantic constraints, and communicative intent—that children must weave together in the process of word learning, and show the different mix of these factors used at different developmental points. In considering the many different factors at work, the contributors avoid both the "either-or" approach, which singles out one strand to explain word learning throughout childhood, and the "all-inclusive" approach, which considers the mélange of factors together. Their goal is to discover precisely which strands of ability or understanding make which contributions to acquisition at which points in infancy and childhood.
The nineteen chapters are arranged in two broadly thematic sections. The chapters in "Initial Acquisitions" focus on issues involved in word learning during infancy, including how learners represent the sound patterns of words, infants' use of action knowledge to understand the meaning of words, and the links between early word learning and conceptual organization. In "Later Acquisitions," the chapters treat topics concerning the stages of toddler and preschooler language acquisition, including part-of-speech information in word learning, the proper-count distinction, and a comparison of verb acquisition in English and Spanish. Because the contributors present their work in the broader context of the interconnection of different processes in lexical acquisition, the chapters in Weaving a Lexicon should suggest new directions for research in the field.
A massive reference work on the scale of MITECS (The MIT Encyclopedia of the Cognitive Sciences), The MIT Encyclopedia of Communication Disorders (MITECD) will become the standard reference in this field for both research and clinical use. It offers almost 200 detailed entries, covering the entire range of communication and speech disorders in children and adults, from basic science to clinical diagnosis. MITECD is divided into four sections that reflect the standard categories within the field (also known as speech-language pathology and audiology): Voice, Speech, Language, and Hearing. Within each category, entries are organized into three subsections: Basic Science, Disorders, and Clinical Management. Basic Science includes relevant information on normal anatomy and physiology, physics, psychology and psychophysics, and linguistics; this provides a scientific foundation for entries in the other subsections. The entries that appear under Disorders offer information on the definition and characterization of specific disorders, and tools for their identification and assessment. The Clinical Management subsection describes appropriate interventions, including behavioral, pharmacological, surgical, and prosthetic. Because the approach to communication disorders can be quite different for children and adults, many topics include separate entries reflecting this.
Although some disorders that are first diagnosed in childhood may persist in some form throughout adulthood, many disorders can have an onset in either childhood or adulthood, and the timing of onset can have many implications for both assessment and intervention. Topics covered in MITECD include cochlear implants for children and adults, pitch perception, tinnitus, alaryngeal voice and speech rehabilitation, neural mechanisms of vocalization, holistic voice therapy techniques, computer-based approaches to children's speech and language disorders, neurogenic mutism, regional dialect, agrammatism, global aphasia, and psychosocial problems associated with communicative disorders.
This book can be read on two levels: as a novel empirical study of wh-interrogatives and relative constructions in a variety of languages and as a theoretical investigation of chain formation in grammar.
The book is divided into two parts. Part I investigates the distribution and interpretation of multiple wh-interrogative constructions, focusing on the workings of Superiority. Part II investigates the structure and derivation of relative constructions. The main languages discussed are Lebanese Arabic, Chinese, and English. The theoretical framework is that of the generative grammar tradition.
This is a representative collection of the work of one of the world's leading scholars in the area of speech acoustics. It follows the development over the past 15 years of research presented in the author's previous publications on speech analysis, feature theory, and applications to language descriptions. Most of the articles have had very restricted distribution—many appearing only in the Quarterly Progress Reports issued by Dr. Fant's laboratory.
The first part of the book covers manifold aspects of speech analysis such as instrumental techniques, spectrum data, formant statistics with an emphasis on Swedish vowels and stops, speaker dependencies, normalization procedures, production theory, and coarticulation. The second part analyzes established feature systems, suggests revisions, and offers general discussion of the concept of distinctive features, perception, and the applicability of feature theory to automatic speech recognition. Articles in this part of the book are especially valuable because they represent Dr. Fant's work on "inherent" features and their phonetic correlates as it has evolved since his collaboration in 1952 with Roman Jakobson and Morris Halle.
Speech Sounds and Features is the fourth volume in the series Current Studies in Linguistics.
One of the most important and controversial topics in the field of visual attention is the nature of the units of attentional selection. Traditional models have characterized attention in spatial terms, as a "spotlight" that moves around the visual field, applying processing resources to whatever falls within that spatial region. Recent models of attention, in contrast, suggest that in some cases the underlying units of selection are discrete visual objects and that attention may be limited by the number of objects that can be simultaneously selected.
Objects and Attention explores the idea that attention and objecthood are intimately and importantly related. In addition to reviewing the evidence for object-based attention and exploring what can "count" as an object of attention, it examines how such issues relate to other sensory modalities, such as auditory objects of attention, and to other areas of cognitive science, such as the infant’s object concept. The book has applications to work in experimental cognitive psychology, neuropsychology and cognitive neuroscience, philosophy of mind, developmental psychology, computer modeling, and the psychology of audition.
How do children learn that the word "dog" refers not to all four-legged animals, and not just to Ralph, but to all members of a particular species? How do they learn the meanings of verbs like "think," adjectives like "good," and words for abstract entities such as "mortgage" and "story"? The acquisition of word meaning is one of the fundamental issues in the study of mind.
According to Paul Bloom, children learn words through sophisticated cognitive abilities that exist for other purposes. These include the ability to infer others' intentions, the ability to acquire concepts, an appreciation of syntactic structure, and certain general learning and memory abilities. Although other researchers have associated word learning with some of these capacities, Bloom is the first to show how a complete explanation requires all of them. The acquisition of even simple nouns requires rich conceptual, social, and linguistic capacities interacting in complex ways.
This book requires no background in psychology or linguistics and is written in a clear, engaging style. Topics include the effects of language on spatial reasoning, the origin of essentialist beliefs, and the young child's understanding of representational art. The book should appeal to general readers interested in language and cognition as well as to researchers in the field.
A machine for language? Certainly, say the neurophysiologists, busy studying the language specializations of the human brain and trying to identify their evolutionary antecedents. Linguists such as Noam Chomsky talk about machinelike "modules" in the brain for syntax, arguing that language is more an instinct (a complex behavior triggered by simple environmental stimuli) than an acquired skill like riding a bicycle.
But structured language presents the same evolutionary problems as feathered forelimbs for flight: you need a lot of specializations to fly even a little bit. How do you get them, if evolution has no foresight and the intermediate stages do not have intermediate payoffs? Some say that the Darwinian scheme for gradual species self-improvement cannot explain our most valued human capability, the one that sets us so far above the apes, language itself.
William Calvin and Derek Bickerton suggest that other evolutionary developments, not directly related to language, allowed language to evolve in a way that eventually promoted a Chomskian syntax. They compare these intermediate behaviors to the curb-cuts originally intended for wheelchair users. Their usefulness was soon discovered by users of strollers, shopping carts, rollerblades, and so on. The authors argue that reciprocal altruism and ballistic movement planning were "curb-cuts" that indirectly promoted the formation of structured language. Written in the form of a dialogue set in Bellagio, Italy, Lingua ex Machina presents an engaging challenge to those who view the human capacity for language as a winner-take-all war between Chomsky and Darwin.
In this book Mark Steedman argues that the surface syntax of natural languages maps spoken and written forms directly to a compositional semantic representation that includes predicate-argument structure, quantification, and information structure, without constructing any intervening structural representation. His purpose is to construct a principled theory of natural grammar that is directly compatible with both explanatory linguistic accounts of a number of problematic syntactic phenomena and a straightforward computational account of the way sentences are mapped onto representations of meaning. The radical nature of Steedman's proposal stems from his claim that much of the apparent complexity of syntax, prosody, and processing follows from the lexical specification of the grammar and from the involvement of a small number of universal rule-types for combining predicates and arguments. These syntactic operations are related to the combinators of Combinatory Logic, engendering a much freer definition of derivational constituency than is traditionally assumed. This property allows Combinatory Categorial Grammar to capture elegantly the structure and interpretation of coordination and intonation contour in English as well as some well-known interactions between word order, coordination, and relativization across a number of other languages. It also allows more direct compatibility with incremental semantic interpretation during parsing.
The book covers topics in formal linguistics, intonational phonology, computational linguistics, and experimental psycholinguistics, presenting them as an integrated theory of the language faculty in a form accessible to readers from any of those fields.
Recent work in theoretical syntax has revealed the strong explanatory power of the notions of economy, competition, and optimization. Building grammars entirely upon these elements, Optimality Theory syntax provides a theory of universal grammar with a formally precise and strongly restricted theory of universal typology: cross-linguistic variation arises exclusively from the conflict among universal principles.
Beginning with a general introduction to Optimality Theory syntax, this volume provides a comprehensive overview of the state of the art, as represented by the work of the leading developers of the theory. The broad range of topics treated includes morphosyntax (case, inflection, voice, and cliticization), the syntax of reference (control, anaphora, and pronominalization), the grammar of clauses (complementizers and their absence), and grammatical and discourse effects in word order. Among the theoretical themes running throughout are the interplay between faithfulness and markedness, and various questions of typology and of inventory.
Peter Ackema, Judith Aissen, Eric Bakovic, Joan Bresnan, Hye-Won Choi, João Costa, Jane Grimshaw, Edward Keer, Géraldine Legendre, Gereon Müller, Ad Neeleman, Vieri Samek-Lodovici, Peter Sells, Margaret Speas, Sten Vikner, Colin Wilson, Ellen Woolford.
Using sentence comprehension as a case study for all of cognitive science, David Townsend and Thomas Bever offer an integration of two major approaches, the symbolic-computational and the associative-connectionist. The symbolic-computational approach emphasizes the formal manipulation of symbols that underlies creative aspects of language behavior. The associative-connectionist approach captures the intuition that most behaviors consist of accumulated habits. The authors argue that the sentence is the natural level at which associative and symbolic information merge during comprehension.
The authors develop and support an analysis-by-synthesis model that integrates associative and symbolic information in sentence comprehension. This integration resolves problems each approach faces when considered independently. The authors review classic and contemporary symbolic and associative theories of sentence comprehension, and show how recent developments in syntactic theory fit well with the integrated analysis-by-synthesis model. They offer analytic, experimental, and neurological evidence for their model and discuss its implications for broader issues in cognitive science, including the logical necessity of an integration of symbolic and connectionist approaches in the field.