  • The development of predictive processes in children’s discourse understanding

    We investigate children's online predictive processing as it occurs naturally, in conversation. We showed 1- to 7-year-olds short videos of improvised conversations between puppets, controlling for available linguistic information through phonetic manipulation. Even one- and two-year-old children made accurate and spontaneous predictions about when a turn-switch would occur: they gazed at the upcoming speaker before they heard a response begin. This predictive skill relies on the combination of lexical and prosodic information and is not tied to either type of information alone. We suggest that children integrate prosodic, lexical, and visual information to effectively predict upcoming linguistic material in conversation.

  • Visual search and attention to faces during early infancy

    Newborn babies look preferentially at faces and face-like displays, yet over the course of their first year much changes about both the way infants process visual stimuli and how they allocate their attention to the social world. Despite this initial preference for faces in restricted contexts, the amount of time infants spend looking at faces increases considerably during the first year. Is this development related to changes in attentional orienting abilities? We explored this possibility by showing 3-, 6-, and 9-month-olds engaging animated and live-action videos of social stimuli and by measuring their visual search performance with both moving and static search displays. Replicating previous findings, looking at faces increased with age; in addition, the amount of looking at faces was strongly related to the youngest infants’ performance in visual search. These results suggest that infants’ attentional abilities may be an important factor in facilitating their social attention early in development.

  • Measuring the development of social attention using free-viewing

    How do young children direct their attention to other people in the natural world? Although many studies have examined the perception of faces and of goal-directed actions, relatively little work has focused on what children look at in complex, unconstrained viewing environments. To address this question, we showed videos of objects, faces, children playing with toys, and complex social scenes to a large sample of infants and toddlers between 3 and 30 months old. We found systematic developmental changes in what children looked at. When viewing faces alone, younger children looked more at eyes and older children more at mouths, especially when the faces were making expressions or talking. In the more complex videos, older children looked more at hands than younger children did, especially when the hands were performing actions. Our results suggest that as children develop, they become better able to direct their attention to the most socially interesting parts of complex scenes.

  • Number as a cognitive technology: Evidence from Pirahã language and cognition

    Does speaking a language without number words change the way speakers of that language perceive exact quantities? The Pirahã are an Amazonian tribe who have been previously studied for their limited numerical system [Gordon, P. (2004). Numerical cognition without words: Evidence from Amazonia. Science 306, 496–499]. We show that the Pirahã have no linguistic method whatsoever for expressing exact quantity, not even “one.” Despite this lack, when retested on the matching tasks used by Gordon, Pirahã speakers were able to perform exact matches with large numbers of objects perfectly but, as previously reported, they were inaccurate on matching tasks involving memory. These results suggest that language for exact number is a cultural invention rather than a linguistic universal, and that number words do not change our underlying representations of number but instead are a cognitive technology for keeping track of the cardinality of large sets across time, space, and changes in modality.

  • Development of infants’ attention to faces during the first year

    In simple tests of preference, infants as young as newborns prefer faces and face-like stimuli over distractors. Little is known, however, about the development of attention to faces in complex scenes. We recorded the eye movements of 3-, 6-, and 9-month-old infants and adults during free viewing of clips from A Charlie Brown Christmas (an animated film). The tendency to look at faces increased with age. Using novel computational tools, we found that 3-month-olds were less consistent (across individuals) in where they looked than were older infants. Moreover, younger infants’ fixations were best predicted by low-level image salience rather than by the locations of faces. Between 3 and 9 months of age, infants gradually focused their attention on faces. We discuss several possible interpretations of this shift in terms of social development, cross-modal integration, and attentional/executive control.

  • Representing exact number visually using mental abacus

    Mental abacus (MA) is a system for performing rapid and precise arithmetic by manipulating a mental representation of an abacus, a physical calculation device. Previous work has speculated that MA is based on visual imagery, suggesting that it might be a method of representing exact number nonlinguistically, but given the limitations on visual working memory, it is unknown how MA structures could be stored. We investigated the structure of the representations underlying MA in a group of children in India. Our results suggest that MA is represented in visual working memory by splitting the abacus into a series of columns, each of which is independently stored as a unit with its own detailed substructure. In addition, we show that the computations of practiced MA users (but not those of control participants) are relatively insensitive to verbal interference, consistent with the hypothesis that MA is a nonlinguistic format for exact numerical computation.