
Volume search

  • Rules infants look by: Testing the assumption of transitivity in visual salience

    What drives infants’ attention in complex visual scenes? Early models of infant attention suggested that the degree to which different visual features are detectable determines their attentional priority. Here, we tested this by asking whether two targets – defined by different features, but each equally salient when evaluated independently – would drive attention equally when pitted head-to-head. In Experiment 1, we presented infants with arrays of Gabor patches in which a target region varied either in color (hue saturation) or spatial frequency (cycles per degree) from the background. Using a forced-choice preferential-looking method, we measured how readily infants fixated the target as its featural difference from the background was parametrically increased. In Experiment 2, we then used these psychometric preference functions to choose values for color and spatial frequency that were equally salient (preferred) and pitted them against each other within the same display. We reasoned that if salience is transitive, the two stimuli should be iso-salient and infants should therefore show no systematic preference for either one. On the contrary, we found that infants consistently preferred the color-defined stimulus. This suggests that computing visual salience in more complex scenes must include factors above and beyond local salience values.
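    As a concrete illustration of the two-step logic above, the following minimal sketch (Python, with made-up preference data and an assumed logistic form for the psychometric preference function; it is not the authors' analysis code) fits one preference function per feature dimension and inverts each fit at a common preference level to obtain nominally iso-salient feature values.

    ```python
    # Minimal sketch (assumed logistic psychometric form, made-up data),
    # illustrating how iso-preferred feature values could be chosen.
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(x, x0, k, lapse=0.02):
        """Preference for the target as a function of featural difference x.
        Rises from 0.5 (chance) toward 1 - lapse as x grows."""
        return 0.5 + (0.5 - lapse) / (1.0 + np.exp(-k * (x - x0)))

    def fit_preference(levels, prop_target_looking):
        """Fit the logistic to observed proportion-of-looking data (x0 and k only)."""
        params, _ = curve_fit(logistic, levels, prop_target_looking,
                              p0=[np.median(levels), 1.0], maxfev=10000)
        return params  # (x0, k)

    def iso_preferred_level(params, target_pref=0.70, lapse=0.02):
        """Invert the fitted function: which feature difference yields target_pref?"""
        x0, k = params
        y = (target_pref - 0.5) / (0.5 - lapse)
        return x0 + np.log(y / (1.0 - y)) / k

    # Hypothetical group data: feature-difference levels and preference proportions.
    color_levels = np.array([0.1, 0.2, 0.4, 0.8, 1.6])   # e.g., saturation difference
    color_pref   = np.array([0.52, 0.58, 0.68, 0.80, 0.88])
    sf_levels    = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # e.g., cycles/deg difference
    sf_pref      = np.array([0.51, 0.55, 0.64, 0.78, 0.86])

    color_fit = fit_preference(color_levels, color_pref)
    sf_fit = fit_preference(sf_levels, sf_pref)
    print("Iso-preferred color difference:", iso_preferred_level(color_fit))
    print("Iso-preferred spatial-frequency difference:", iso_preferred_level(sf_fit))
    ```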

  • Intersensory Processing Efficiency Protocol (IPEP)

    IPEP Description: The Intersensory Processing Efficiency Protocol (IPEP) is a novel protocol designed to assess fine-grained individual differences in the speed and accuracy of intersensory processing for audiovisual social and nonsocial events. It is appropriate for infants (starting at 3 months of age) as well as children and adults. The IPEP is an audiovisual search task that requires visually locating a sound-synchronized target event amidst 5 asynchronous distractor events. Visual attention is typically monitored with an eye tracker. The protocol indexes intersensory processing through 1) accuracy in selection (frequency of fixating the target), 2) accuracy in matching (duration of looking to the target), and 3) speed in selection (latency to fixate the target) for social and nonsocial events (see highlight video).

    IPEP Method: The protocol consists of 48 8-s trials arranged in 4 blocks (2 social, 2 nonsocial) of 12 trials each, alternating between social and nonsocial blocks. On each trial, participants view a 3 x 2 grid of 6 dynamic visual events along with a natural soundtrack synchronized with one (target) event. The social events consist of 6 faces of women, each reciting a different story with positive affect. The nonsocial events consist of objects striking a surface in an erratic temporal pattern, creating percussive sounds. On each trial, the natural soundtrack synchronized with the target face or object is played for 8 s. Trials are preceded by a 3-s attention-getter (a looming and contracting smiley face).

    IPEP Measures: Proportion of total trials on which the target was fixated (PTTF; accuracy in selection), proportion of total looking time to the target (PTLT; accuracy in matching), and latency to fixate the target event (RT; speed in selection) are derived from eye tracking (offline coding schemes are being developed). The highlight video below presents 3 exemplar trials from each condition (social, then nonsocial), with the attention-getter preceding each trial. Note: the precision of audiovisual synchrony in these examples may vary with internet connection type and available bandwidth and may not reflect the actual temporal synchronies. The full protocol is played using custom-designed Matlab software that interfaces with eye-tracking software (Tobii Studio).
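    To make the three IPEP indices concrete, here is a minimal sketch in Python using a hypothetical per-trial fixation format (the actual protocol uses custom Matlab software and Tobii Studio; the field names and data layout below are invented for illustration). It computes PTTF, PTLT, and RT from a list of trials, each carrying the target's grid cell and time-stamped fixations.

    ```python
    # Hedged sketch: compute IPEP-style indices from hypothetical eye-tracking data.
    # Field names and data layout are invented for illustration only.
    from statistics import mean

    def ipep_indices(trials):
        """Each trial is a dict with:
             'target_cell': grid cell (0-5) of the sound-synchronized event,
             'fixations': list of (cell, onset_s, offset_s) within the 8-s trial."""
        fixated, latencies, target_time, total_time = 0, [], 0.0, 0.0
        for t in trials:
            on_target = [(on, off) for cell, on, off in t['fixations']
                         if cell == t['target_cell']]
            total_time += sum(off - on for _, on, off in t['fixations'])
            target_time += sum(off - on for on, off in on_target)
            if on_target:
                fixated += 1
                latencies.append(min(on for on, _ in on_target))  # first look at target
        return {
            'PTTF': fixated / len(trials),                             # accuracy in selection
            'PTLT': target_time / total_time if total_time else 0.0,   # accuracy in matching
            'RT': mean(latencies) if latencies else None,              # speed in selection (s)
        }

    # Example with two made-up trials
    demo = [
        {'target_cell': 2, 'fixations': [(0, 0.4, 1.1), (2, 1.3, 4.0), (5, 4.2, 5.0)]},
        {'target_cell': 4, 'fixations': [(1, 0.5, 2.0), (3, 2.2, 3.1)]},
    ]
    print(ipep_indices(demo))
    ```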

  • Multisensory Attention Assessment Protocol (MAAP)

    MAAP Description: The Multisensory Attention Assessment Protocol (MAAP) is a novel procedure designed to assess individual differences in multiple components of attention to dynamic, audiovisual social and nonsocial events within a single session, derived from standard visual preference procedures and gap-overlap tasks. It is appropriate for infants and children starting at 3 months of age. The MAAP integrates, into a single test, measures of three fundamental “building blocks” of attention that support the typical development of social and communicative functioning. The protocol indexes 1) duration of looking (attention maintenance), 2) speed of attention shifting, and 3) accuracy of intersensory matching to audiovisual events in the context of high competition (an irrelevant distractor event is present) or low competition (no distractor event present). It thus provides 6 measures of attention—duration, speed, and accuracy under high vs. low competition—for social and nonsocial events. The difference between performance under the high- and low-competition conditions reflects the cost of competing stimulation on each measure of attention (see highlight video).

    MAAP Method: The protocol consists of 24 13-s trials composed of 2 blocks of 12 trials: one block of social events (women speaking with positive affect) and one block of nonsocial events (objects being dropped into a clear container). On each trial, a 3-s central visual stimulus (dynamic geometric patterns) is followed by two lateral, dynamic video events for 12 s—one in synchrony with an accompanying natural soundtrack and the other out of synchrony. On half of the trials, the central visual event remains on while the lateral events are presented, providing additional competing stimulation (high competition trials); on the other half, the central visual event disappears as soon as the lateral events appear (low competition trials). Participants are videotaped and/or coded live by trained observers who are blind to the lateral positions of the events.

    MAAP Measures: Duration of looking (proportion of available looking time [PALT] spent fixating either lateral event), speed of attention shifting (reaction time [RT] to shift attention to either lateral event), and accuracy of intersensory matching (proportion of total looking time [PTLT] to the sound-synchronous lateral event) are assessed under high and low competition for social and nonsocial events. The highlight video below presents 2 exemplar trials from each condition in the MAAP: 1) low competition social, 2) high competition social, 3) low competition nonsocial, 4) high competition nonsocial. Note: the precision of audiovisual synchrony in these examples may vary with internet connection type and available bandwidth and may not reflect the actual temporal synchronies. The full protocol is played using custom-designed Matlab software.
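    As an illustration of how the competition cost described above could be computed, the following minimal Python sketch (with an invented per-condition summary layout, not the protocol's actual Matlab pipeline) takes PALT, RT, and PTLT under high and low competition and returns the high-minus-low difference for each measure.

    ```python
    # Hedged sketch: competition cost = performance under high minus low competition.
    # The dictionary layout and example values are invented for illustration only.

    def competition_cost(summary):
        """summary maps condition ('high', 'low') -> {'PALT': ..., 'RT': ..., 'PTLT': ...}.
        Returns the high-minus-low difference for each MAAP measure."""
        return {measure: summary['high'][measure] - summary['low'][measure]
                for measure in ('PALT', 'RT', 'PTLT')}

    # Example with made-up values for one infant in the social condition:
    social = {
        'low':  {'PALT': 0.82, 'RT': 0.45, 'PTLT': 0.63},
        'high': {'PALT': 0.61, 'RT': 0.78, 'PTLT': 0.55},
    }
    print(competition_cost(social))
    # e.g., {'PALT': -0.21, 'RT': 0.33, 'PTLT': -0.08}
    # Negative PALT/PTLT costs and a positive RT cost indicate poorer attention
    # maintenance and matching and slower shifting when a distractor competes.
    ```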

  • Affordances of Clay at the Clay Topos

    The “Clay Topos” is a playroom in a nursery school (Takahashi Chuo Hoikuen, Japan) where children can be surrounded by, and play with, up to 0.8 tons of the high-quality soil clay used by artists. The Clay Topos was conceived and realized by the Japanese sculptor Hideki Maeshima. In this collection of videos, a group of children played freely in the Clay Topos, where the structure of children's spontaneous activity and the nature of the forces that shape that activity can be observed. In the first session, 11 children (16 to 26 months old; Group 1) were invited into the Clay Topos for the first time to play freely for 35 minutes. Caregivers from the nursery school were present and interacted naturally with the children. Three roughly square-shaped lumps of soil clay (about 90 x 90 cm and 15 cm thick, with uneven surfaces) were placed in a row on the floor, about 70 cm apart. In the second session, the behavior of the same children in the same Clay Topos was recorded 5 months after the first session; between the two sessions, these children had played in the Clay Topos several times. A third set of recordings shows the behavior of different, older children (29 to 38 months old; Group 2) when they were invited into the Clay Topos for the first time. All sessions were recorded simultaneously with 4 video cameras from multiple perspectives, so that each child can be followed wherever they move in the room.

  • 2016 Cognitive Science Workshop: Active Learning

    Talks and slides from a full-day workshop on "Active Learning" at the 2016 meeting of the Cognitive Science Society. In this workshop, we invite speakers from a variety of approaches, including cognitive development, education, and computational modeling, to broadly inform our understanding of active learning. We examine what "active" means in active learning, and include talks on the cognitive mechanisms that might support active learning, including attention, hypothesis generation, explanation, pretend play, and question asking. We also explore how "efficient" learners are when planning and executing actions in the service of learning, and whether there are developmental or socio-economic differences in active learning. The workshop is divided into three main themes, with speakers from education, modeling, and developmental backgrounds in each. After each set of talks, we schedule ample time for discussion, encouraging participants to fully engage with the speakers. The workshop concludes with a final broad discussion of open questions and the future of active learning research.

  • Pubertal development shapes perception of complex facial expressions

    We previously hypothesized that pubertal development shapes the emergence of new components of face processing (Scherf et al., 2012; Garcia & Scherf, 2015). Here, we evaluate this hypothesis by investigating emerging perceptual sensitivity to complex versus basic facial expressions across pubertal development. We tested pre-pubescent children (6–8 years), age- and sex-matched adolescents in early and later stages of pubertal development (11–14 years), and sexually mature adults (18–24 years). Using a perceptual staircase procedure, participants made visual discriminations of both socially complex expressions (sexual interest, contempt) that are arguably relevant to emerging peer-oriented relationships of adolescence, and basic (happy, anger) expressions that are important even in early infancy. Only sensitivity to detect complex expressions improved as a function of pubertal development. The ability to perceive these expressions is adult-like by late puberty when adolescents become sexually mature. This pattern of results provides the first evidence that pubertal development specifically influences emerging affective components of face perception in adolescence.

  • Emergence of the ability to perceive dynamic events from still pictures in human infants

    The ability to understand a visual scene depicted in a still image is among the abilities shared by all human beings. The aim of the present study was to examine when human infants acquire the ability to perceive the dynamic events depicted in still images (implied motion perception). To this end, we tested whether 4- and 5-month-old infants shifted their gaze toward the direction cued by a dynamic running action depicted in a still figure of a person. Results indicated that the 5- but not the 4-month-olds showed a significant gaze shift toward the direction implied by the posture of the runner (Experiments 1, 2, and 3b). Moreover, the older infants showed no significant gaze shift toward the direction cued by control stimuli, which depicted a figure in a non-dynamic standing posture (Experiment 1), an inverted running figure (Experiment 2), and some of the body parts of a running figure (Experiment 3a). These results suggest that only the older infants responded in the direction of the implied running action of the still figure; thus, implied motion perception emerges around 5 months of age in human infants.

  • Infant-specific gaze patterns in response to radial optic flow

    The focus of radial optic flow is a visual cue that animals use to perceive and control heading direction. Gaze patterns in response to the focus of radial optic flow were measured in human infants (N = 100, 4–18 months) and in adults (N = 20) using an eye-tracking technique. Overall, although the adults showed an advantage in detecting the focus of an expansion flow (representing forward locomotion) relative to that of a contraction flow (representing backward locomotion), infants younger than 1 year showed an advantage in detecting the focus of a contraction flow. Infants aged between 13 and 18 months showed no significant advantage in detecting the focus of either the expansion or the contraction flow. The distinctiveness of infants’ gaze patterns in response to the focus of radial optic flow suggests that the visual information used to perceive heading direction may differ between younger and mature individuals.

  • Examples of True Belief and False Belief Contents tasks with 4-year-olds

    We use a standard Contents False Belief (FB) task paired with a Contents True Belief (TB) task to assess children's belief understanding. In each task, children are asked the test question (“What will the other person think is in the box?”) and are asked to justify their answers (“Why will he think _____ is inside the box?”). The order of the TB and FB tasks is counterbalanced across children.

    Note: The “Highlights” show children answering the TB test question. Go to the individual participants' folders (just below the "Highlights" section under "Data") to see the videos showing the entirety of the TB and FB tasks. The full data forms and task scripts are available in the "Materials" folder.

    One child illustrates Reality Reasoning (RR): he fails the FB task and passes the TB task. In both he says that the other person will “get it right” and think that the actual contents are in the box, and justifies his answers by saying, in effect, “because that’s what’s in there.”

    One child illustrates Perceptual Access Reasoning (PAR): he passes the FB task and fails the TB task. In both cases he says that the other person will “get it wrong” and think that what isn’t in the box is in there (crayons in false belief, key in true belief), and justifies his answers by saying, in effect, “because he doesn’t know what’s in there.”

    One child illustrates Belief Reasoning (BR): he passes both the FB task and the TB task. In both cases he says that the other person will think the typical contents are in the box, and justifies his answers by saying, “because it’s a [crayon / M&M’s] box.”

    In a large (N = 161) longitudinal study of ours, all children at 4 ½ and at 6 years of age could be reliably classified as using RR, a mixture of RR & PAR, PAR, or BR. Scoring criteria are available upon request.
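    The RR / PAR / BR patterns described above amount to a simple decision rule over pass/fail outcomes on the paired tasks. The sketch below (Python, with an invented pass/fail coding; the study's own scoring criteria, including the mixed RR & PAR category derived across multiple administrations, are more detailed and available on request) shows the single-pair version of that rule.

    ```python
    # Hedged sketch: classify one FB/TB pair by the patterns described above.
    # The coding scheme is invented for illustration; the authors' full scoring
    # criteria (including the mixed RR & PAR category) are more detailed.

    def classify_pair(passed_fb: bool, passed_tb: bool) -> str:
        if passed_fb and passed_tb:
            return "BR"   # Belief Reasoning: typical contents attributed in both tasks
        if not passed_fb and passed_tb:
            return "RR"   # Reality Reasoning: actual contents attributed in both tasks
        if passed_fb and not passed_tb:
            return "PAR"  # Perceptual Access Reasoning: "gets it wrong" in both tasks
        return "unclassified"  # failing both is not one of the three illustrated patterns

    print(classify_pair(passed_fb=False, passed_tb=True))   # RR
    print(classify_pair(passed_fb=True,  passed_tb=False))  # PAR
    print(classify_pair(passed_fb=True,  passed_tb=True))   # BR
    ```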

  • XX ICIS 2016 New Orleans Invited Program

    Invited program, International Congress on Infant Studies, New Orleans, 2016.

    WEDNESDAY, MAY 25, 2016
      • Preconference: Best Practices in Infancy Research (Jessica Sommerville, Kiley Hamlin, Lisa Oakes, John Colombo, Michael Frank, Laura Namy, Lisa Freund, Marita Hopmann, Infancy/ICIS panel)
      • Preconference: Databrary & Datavyu (Karen Adolph, Rick O. Gilmore)

    THURSDAY, MAY 26, 2016
      • Plenary Speaker: Birth of a Word (Deb Roy)
      • Invited Speaker: Nutrition & Early Child Development: The First 1000 Days (Maureen Black)
      • Invited Speaker: Human Amygdala: PFC Circuit Development & the Role of Caregiving (Nim Tottenham)
      • Invited Symposium: Development of Attentional Control in Infancy: Insights from Eye-Movements (Shannon Ross-Sheehy, Sam Wass, Susan Rivera)

    FRIDAY, MAY 27, 2016
      • Presidential Address: Oh, Behave! (Karen Adolph)
      • Invited Speaker: Ontogeny of Social Visual Engagement in Infants and Toddlers with Autism Spectrum Disorder (Ami Klin)
      • Views by Two: Comparative & Developmental Methods in Social Cognition (Laurie Santos, Felix Warneken)
      • Views by Two: Language Learning in Multiple Language Contexts (Jesse Snedeker, Catherine Tamis-LeMonda)
      • Views by Two: Learning from Multiple Inputs by Humans and Robots (Linda Smith, Pierre-Yves Oudeyer)
      • Dedicated Session: Carolyn Rovee-Collier: Her Legacy for Science, Practice, and Academic Leadership (Andrew Meltzoff, Rachel Barr, Kimberly Boller, Harlene Hayne)
      • Presidential Symposium: Global Issues in Development (Sandra Waxman, Cristine Legare)

    SATURDAY, MAY 28, 2016
      • Invited Speaker: Learning in Infancy: A Rational Response to Stability and Change (Dick Aslin)
      • Invited Speaker: Just Babies: The Origin of Good and Evil (Paul Bloom)
      • Views by Two: Studying Autism with Technology, sponsored by Positive Science (Jim Rehg, Brian Scassellati)
      • Dedicated Session: In Honor of Gerald Turkewitz: His Scientific Legacy, sponsored by Johnson and Johnson Consumer Inc. (David J. Lewkowicz, Robert Lickliter, David S. Moore, Janet Werker)
      • Invited Symposium: Methods & Meanings: New Insights into Infant Emotional Processes (Koraly Perez-Edgar, Daniel Messinger)
      • Awards Ceremony

  • The Social Origins of Sustained Attention in One-Year-Old Human Infants

    The ability to sustain attention is a major achievement in human development and is generally believed to be the developmental product of increasing self-regulatory and endogenous (i.e., internal, top-down, voluntary) control over one’s attention and cognitive systems [1–5]. Because sustained attention in late infancy is predictive of future development, and because early deficits in sustained attention are markers for later diagnoses of attentional disorders [6], sustained attention is often viewed as a constitutional and individual property of the infant [6–9]. However, humans are social animals; developmental pathways for seemingly non-social competencies evolved within the social group and therefore may be dependent on social experience [10–13]. Here, we show that social context matters for the duration of sustained attention episodes in one-year-old infants during toy play. Using head-mounted eye tracking to record moment-by-moment gaze data from both parents and infants, we found that when the social partner (parent) visually attended to the object to which infant attention was directed, infants, after the parent’s look, extended their duration of visual attention to the object. Looking at the same object by two social partners is a well-studied phenomenon known as joint attention, which has been shown to be critical to early learning and to the development of social skills [14, 15]. The present findings implicate joint attention in the development of the child’s own sustained attention and thus challenge the current understanding of the origins of individual differences in sustained attention, providing a new and potentially malleable developmental pathway to the self-regulation of attention.

  • EEG Asymmetry and ERN: Behavioral Outcomes in Preschoolers

    Paper in press at PLOS ONE examining associations between the error-related negativity (ERN) and alpha asymmetry in preschoolers. As in research on adults, greater left asymmetry (i.e., greater approach-related neural activity) was correlated with reduced ERN amplitude (i.e., weaker inhibition-related neural activity). The interactive effect of asymmetry and ERN amplitude did not predict ADHD symptoms, but did predict social inhibition: when the ERN was greater, less left asymmetry was associated with higher levels of social inhibition. Results were most prominent at parietal EEG sites. Implications for understanding the development of the overlap in the neural systems of approach and inhibition are discussed.

  • Pupillary Contagion in Infancy: Evidence for spontaneous transfer of arousal

    Pupillary contagion – responding to observed pupil size with changes in one’s own pupil – has been observed in adults and suggests that arousal and other internal states could be transferred across individuals via a subtle physiological cue. Examining this phenomenon developmentally gives insight into its origins and underlying mechanisms, such as whether it is an automatic adaptation already present in infancy. In the current study, 6- and 9-month-olds viewed schematic depictions of eyes with smaller and larger pupils – pairs of circles with smaller and larger black centers – while their own pupil size was recorded. Control stimuli were comparable squares. For both age groups, infants’ pupil size was greater when viewing large-centered than small-centered circles, and no differences were found for squares. The findings suggest that infants are sensitive and responsive to subtle cues to others’ internal states, a mechanism that would be beneficial for early social development.

  • The Novel Object and Unusual Name (NOUN) Database: A collection of novel images for use in experimental research

    Many experimental research designs require images of novel objects. Here we introduce the Novel Object and Unusual Name (NOUN) Database. This database contains 64 primary novel object images and additional novel exemplars for ten basic- and nine global-level object categories. The objects’ novelty was confirmed by both self-report and a lack of consensus on questions that required participants to name and identify the objects. We also found that object novelty correlated with qualifying naming responses pertaining to the objects’ colors. The results from a similarity sorting task (and a subsequent multidimensional scaling analysis of the similarity ratings) demonstrated that the objects are complex and distinct entities that vary along several featural dimensions beyond simply shape and color. A final experiment confirmed that the additional item exemplars comprise both sub- and superordinate categories. These images may be useful in a variety of settings, particularly for developmental psychology and other research on language, categorization, perception, visual memory, and related domains.

  • Infants' use of social information for guiding locomotion over slopes

    Excerpts from several studies on infants' use of social information for guiding locomotion over slopes.

    Adolph, K. E., Karasik, L. B., & Tamis-LeMonda, C. S. (2010). Using social information to guide action: Infants’ locomotion over slippery slopes. Neural Networks, 23, 1033-1042. [Special issue on social cognition]
    Adolph, K. E., Tamis-LeMonda, C. S., Ishak, S., Karasik, L. B., & Lobo, S. A. (2008). Locomotor experience and use of social information are posture specific. Developmental Psychology, 44, 1705-1714.
    Karasik, L. B., Tamis-LeMonda, C. S., Adolph, K. E., & Dimitropoulou, K. A. (2008). How mothers encourage and discourage infants’ motor actions. Infancy, 13, 366-392.
    Tamis-LeMonda, C. S., Adolph, K. E., Lobo, S. A., Karasik, L. B., Dimitropoulou, K. D., & Ishak, S. (2008). When infants take mothers’ advice: 18-month-olds integrate perceptual and social information for guiding motor action. Developmental Psychology, 44, 734-746.

  • Being Sticker Rich: Numerical Context Influences Children’s Sharing Behavior

    Abstract: Young children spontaneously share resources with anonymous recipients, but little is known about the specific circumstances that promote or hinder these prosocial tendencies. Children (ages 3-11) received a small (12) or large (30) number of stickers, and were then given the opportunity to share their windfall with either one or multiple anonymous recipients (Dictator Game). Whether a child chose to share or not varied as a function of age, but was uninfluenced by numerical context. Moreover, children’s giving was consistent with a proportion-based account, such that children typically donated a similar proportion (but different absolute number) of the resources given to them, regardless of whether they originally received a small or large windfall. The proportion of resources donated, however, did vary based on the number of recipients with whom they were allowed to share, such that on average, children shared more when there were more recipients available, particularly when they had more resources, suggesting that they take others into consideration when making prosocial decisions. Finally, results indicated that a child’s gender also predicted sharing behavior, with males generally sharing more resources than females. Together, findings suggest that the numerical contexts under which children are asked to share, as well as the quantity of resources that they have to share, may interact to promote (or hinder) altruistic behaviors throughout childhood.
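    A back-of-the-envelope illustration of the proportion-based account described above (using a hypothetical giving rate, not a result from the study): a child who gives away a fixed fraction of the windfall donates different absolute numbers of stickers from the small and large endowments.

    ```python
    # Hedged illustration of proportion-based giving (hypothetical rate, not study data).
    GIVE_RATE = 0.25  # suppose a child gives away roughly a quarter of the windfall

    for endowment in (12, 30):  # small vs. large sticker windfalls used in the study
        donated = round(GIVE_RATE * endowment)
        print(f"{endowment} stickers -> donates {donated} "
              f"({donated / endowment:.0%} of the windfall)")
    # 12 stickers -> donates 3 (25% of the windfall)
    # 30 stickers -> donates 8 (27% of the windfall)
    ```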

  • Active vision in passive locomotion: real-world free viewing in infants and adults

    Visual exploration in infants and adults has been studied using two very different paradigms: free viewing of flat-screen displays in desk-mounted eye-tracking studies and real-world visual guidance of action in head-mounted eye-tracking studies. To test whether classic findings from screen-based studies generalize to real-world visual exploration, and to compare natural visual exploration in infants and adults, we tested observers in a new paradigm that combines critical aspects of both previous techniques: free viewing during real-world visual exploration. Mothers and their 9-month-old infants wore head-mounted eye trackers while mothers carried their infants in a forward-facing infant carrier through a series of indoor hallways. Demands for visual guidance of action were minimal for mothers and absent for infants, so both engaged in free viewing while moving through the environment. As in screen-based studies, low-level saliency was related to gaze direction during free viewing in the real world. In contrast to screen-based studies, only infants – not adults – were biased to look at people, participants of both ages did not show a classic center bias, and mothers and infants did not display high levels of inter-observer consistency. Results indicate that several aspects of visual exploration of a flat-screen display do not generalize to visual exploration in the real world.

  • From faces to hands: Changing visual input in the first two years

    Human development takes place in a social context. Two pervasive sources of social information are faces and hands. Here, we provide the first report of the visual frequency of faces and hands in the everyday scenes available to infants. These scenes were collected by having infants wear head cameras during unconstrained everyday activities. Our corpus of 143 hours of infant-perspective scenes, collected from 34 infants aged 1 month to 2 years, was sampled for analysis at 1/5 Hz. The major finding from this corpus is that the faces and hands of social partners are not equally available throughout the first two years of life. Instead, there is an earlier period of dense face input and a later period of dense hand input. At all ages, hands in these scenes were primarily in contact with objects and the spatio-temporal co-occurrence of hands and faces was greater than expected by chance. The orderliness of the shift from faces to hands suggests a principled transition in the contents of visual experiences and is discussed in terms of the role of developmental gates on the timing and statistics of visual experiences.

  • Facial expressions in 4-month-olds in the still face paradigm and later attachment at 12 months in the Strange Situation

    Four-month-old infants and their parents were video-recorded in the Face-to-Face/Still-Face (FFSF) procedure (Adamson & Frick, 2003). Infants sat in a seat and their parents sat across from them. Parents and infants engaged in 2 minutes of natural positive interaction, then 2 minutes of parent still-face in which parents were asked not to respond to their infants, and then 2 minutes of renewed interaction. At 12 months, these infants and parents completed the Strange Situation, a procedure consisting of a series of separations and reunions, to assess attachment security.