
Volume search

  • Individual data of "Shirai et al. (submitted). Differences in the magnitude of representational momentum between school-aged children and adults as a function of experimental task"

    Representational momentum (RM) is the phenomenon whereby the recalled final position of a moving object that has disappeared is shifted in the direction of its motion. Findings on the magnitude of RM exhibited during childhood are not completely consistent: some indicate that the magnitude of RM in early childhood is comparable to that in adulthood, while others suggest that it is significantly greater in childhood than in adulthood. We examined whether the inconsistencies between the previous studies could be explained by the difference in the experimental tasks they used. One study used a same–different judgment between the position at which a moving stimulus disappeared and that at which a comparison stimulus reappeared (the judgment task). In another study, participants pointed with a computer mouse cursor to the position at which the moving stimulus disappeared (the pointing task). We examined RM in both tasks and found that, in the judgment task, there was no significant difference in the magnitude of RM among the age groups tested (younger children, mean age = 7.4 years; older children, mean age = 10.7 years; adults, mean age = 22.1 years). In the pointing task, however, the younger children exhibited a significantly greater RM magnitude than the adults. We discuss possible reasons for the inconsistent developmental trends shown by the two tasks, notably lower visual motion sensitivity in childhood.

  • Asking children to "be helpers" backfires after setbacks

    Describing behaviors as reflecting categories (e.g., asking children to “be helpers”) has been found to increase prosocial behavior. The present studies (N = 139, ages 4–5) tested whether such effects backfire if children have difficulty performing category-relevant actions. In Study 1, children were asked to “be helpers” or “to help,” and then pretended to complete a series of successful (e.g., pouring milk) and unsuccessful (e.g., spilling milk while trying to pour it) scenarios. After the unsuccessful trials, children asked to “be helpers” had more negative attitudes. In Study 2, asking children to “be helpers” impeded children’s actual helping behavior after they experienced difficulties while trying to help. Implications for how labels shape beliefs and behavior are discussed.

  • Excerpt volume: Illustrations from motor development of Siegler's "Overlapping Waves Model" of strategy choice

    Siegler's "Overlapping Waves Model" proposes that strategies enter children's repertoires at different times and change in frequency over development as behavior becomes more adaptive and functional. Here, we illustrate the Overlapping Waves Model with video clips from several studies in infant and child motor development. (See links for specific papers relevant to the excerpts). Illustration #1: Infants' use of various strategies for descending impossibly steep slopes (backing feet first, sliding head first, sitting, crawling, walking, avoiding descent). Longitudinal observations show that strategies entered infants' repertoires at various ages and changed in frequency as infants became more accurate at detecting affordances for descent over weeks of crawling and walking. See links for Illustration #1-Longitudinal. Cross-sectional observations show that individual infants use multiple strategies within a single session. See links for Illustration #1-Cross-sectional. Illustration #2: Walking infants use multiple strategies for descending drop-offs (backing feet first, sitting, crawling, walking, avoiding descent). See links for Illustration #2-Drop-offs. Illustration #3: 4-year-olds use multiple strategies for pounding a peg with a hammer (radial grip, ulnar grip, two-handed grip, etc.). See links for Illustration #3-Hammering.

  • Individual data of "Shirai et al. (submitted). Development of asymmetric vection for radial expansion/contraction motion: comparison between school-age children and adults"

    Vection is illusory self-motion elicited by visual stimuli; in adults, it is more easily induced by radially contracting than by radially expanding flow. This asymmetry was re-examined with 18 younger children (6–8 years), 19 older children (9–11 years), and 20 adults. In each experimental trial, participants observed either radial expansion or contraction flow, and the latency, cumulative duration, and magnitude of vection were measured. The results indicated that the latency for contraction was significantly shorter than that for expansion in all age groups. Additionally, latency was significantly shorter and magnitude significantly greater in both younger and older children than in adults, regardless of the flow pattern. These results indicate that the asymmetry in vection for expansion/contraction flow emerges by school age, and that school-age children experience significantly more rapid and stronger vection than adults.

  • Databrary User Interviews

    The volume has been created to house videos of Databrary user interviews on a variety of topics (e.g., Datavyu transcription interface) for internal use in application development. If you'd like to participate in such an interview, please email info@databrary.org.

  • Rules infants look by: Testing the assumption of transitivity in visual salience

    What drives infants’ attention in complex visual scenes? Early models of infant attention suggested that the degree to which different visual features are detectable determines their attentional priority. Here, we tested this by asking whether two targets – defined by different features, but each equally salient when evaluated independently – would drive attention equally when pitted head-to-head. In Experiment 1, we presented infants with arrays of Gabor patches in which a target region varied from the background either in color (hue saturation) or in spatial frequency (cycles per degree). Using a forced-choice preferential-looking method, we measured how readily infants fixated the target as its featural difference from the background was parametrically increased. In Experiment 2, we then used these psychometric preference functions to choose color and spatial-frequency values that were equally salient (equally preferred) and pitted them against each other within the same display. We reasoned that if salience is transitive, the stimuli should be iso-salient and infants should therefore show no systematic preference for either stimulus. On the contrary, we found that infants consistently preferred the color-defined stimulus. This suggests that computing visual salience in more complex scenes needs to include factors above and beyond local salience values.
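    Choosing equally preferred feature values from two psychometric preference functions can be sketched by interpolating each function at a common preference criterion. The sketch below is only a minimal illustration: the function name, the 0.65 criterion, and all data points are invented for this example, not taken from the study.

    ```python
    # Hypothetical sketch: read the feature level at which a psychometric
    # preference function crosses a criterion, via linear interpolation.
    # All values here are invented for illustration.
    def level_for_preference(levels, prefs, criterion=0.65):
        """Return the stimulus level at which preference crosses `criterion`.

        Assumes `prefs` increases monotonically with `levels`.
        """
        for (x0, y0), (x1, y1) in zip(zip(levels, prefs), zip(levels[1:], prefs[1:])):
            if y0 <= criterion <= y1:
                # Linear interpolation between the two bracketing points
                return x0 + (criterion - y0) * (x1 - x0) / (y1 - y0)
        raise ValueError("criterion not spanned by the measured preferences")

    # e.g., preference for a color-defined target as saturation increases
    sat_levels = [0.1, 0.2, 0.3, 0.4]
    sat_prefs = [0.50, 0.58, 0.70, 0.78]
    iso_level = level_for_preference(sat_levels, sat_prefs)
    ```

    Doing the same for the spatial-frequency preference function would yield one value per feature dimension, each producing the same preference level when shown alone.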

  • Intersensory Processing Efficiency Protocol (IPEP)

    IPEP Description: The Intersensory Processing Efficiency Protocol (IPEP) is a novel protocol designed to assess fine-grained individual differences in the speed and accuracy of intersensory processing for audiovisual social and nonsocial events. It is appropriate for infants (starting at 3 months of age) as well as children and adults. The IPEP is an audiovisual search task that requires visually locating a sound-synchronized target event amid 5 asynchronous distractor events. Visual attention is typically monitored using an eyetracker. The protocol indexes intersensory processing through 1) accuracy in selection (frequency of fixating the target), 2) accuracy in matching (duration of looking to the target), and 3) speed in selection (latency to fixate the target) for social and nonsocial events (see highlight video). IPEP Method: The protocol consists of 48 8-s trials composed of 4 blocks (2 social, 2 nonsocial), alternating between social and nonsocial blocks of 12 trials each. On each trial, participants view a 3 x 2 grid of 6 dynamic visual events along with the natural synchronized soundtrack to one (target) event. The social events consist of 6 faces of women, all reciting different stories with positive affect. The nonsocial events consist of objects striking a surface in an erratic temporal pattern, creating percussive sounds. On each trial, the natural synchronized soundtrack to the target face or object is played for 8 s. Trials are preceded by a 3-s attention-getter (a looming and contracting smiley face). IPEP Measures: Proportion of total trials on which the target was fixated (PTTF; accuracy in selection), proportion of total looking time to the target (PTLT; accuracy in matching), and latency to fixate the target event (RT; speed in selection) are derived from eyetracking (offline coding schemes are being developed).
The highlight video below presents 3 exemplar trials from each condition (social, then nonsocial) in the IPEP with the attention getter preceding each trial. Note: the precision of audiovisual synchrony viewed in these examples may vary depending on internet connection type and available bandwidth and may not reflect the actual temporal synchronies. The full protocol is played using custom-designed Matlab software which interfaces with eye-tracking software (Tobii Studio).
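    The three IPEP indices described above can be sketched as simple aggregates over per-trial eyetracking summaries. The field names and numbers below are hypothetical illustrations, not the protocol's actual coding scheme or data format.

    ```python
    # Illustrative derivation of the three IPEP indices (PTTF, PTLT, RT)
    # from hypothetical per-trial eyetracking summaries.
    from statistics import mean

    def ipep_measures(trials):
        """PTTF (accuracy in selection), PTLT (accuracy in matching), RT (speed)."""
        # Proportion of trials on which the target was ever fixated
        pttf = mean(t["target_fixated"] for t in trials)
        # Proportion of total looking time spent on the target
        ptlt = sum(t["time_on_target"] for t in trials) / sum(t["total_looking"] for t in trials)
        # Mean latency to first target fixation, over trials where it was fixated
        latencies = [t["latency"] for t in trials if t["latency"] is not None]
        rt = mean(latencies) if latencies else None
        return pttf, ptlt, rt

    trials = [  # one block would hold 12 such trials; 3 shown for brevity
        {"target_fixated": True,  "time_on_target": 3.2, "total_looking": 7.5, "latency": 1.1},
        {"target_fixated": False, "time_on_target": 0.0, "total_looking": 6.8, "latency": None},
        {"target_fixated": True,  "time_on_target": 4.0, "total_looking": 7.9, "latency": 0.9},
    ]
    pttf, ptlt, rt = ipep_measures(trials)
    ```

    Computing each index separately for the social and nonsocial blocks would yield the per-condition scores the protocol describes.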

  • SEEDLingS 6 Month

    These files are part of our longitudinal study, the Study of Environmental Effects on Developing Linguistic Skills (SEEDLingS). This volume includes only the recordings taken at 6 months of age. The recordings in this volume were analyzed, alongside eyetracking data, for the Bergelson & Aslin citation above. (The code and eyetracking data for the paper will be shared via the GitHub link below once the PNAS embargo is lifted.) The broader project is described below: SEEDLingS is a project exploring how infants' early linguistic and environmental input plays a role in their learning. We focus on understanding how babies learn words between 6 and 18 months of age from the visual, social, and linguistic world around them. By looking at the complex environment that babies are exposed to, from their perspective, we can attempt to decode how the developing mind interprets and organizes the objects and words it faces. SEEDLingS is unique in that it combines well-controlled lab studies that assess what words infants know with in-the-home audio and video recordings of what words infants hear, and what they see when they hear these words. Video and audio recordings were generated in the home every month, from 6 to 17 months of age, for a set of 44 infants. The goal of this study is to assess infants' language growth over this period, particularly in the word-learning domain. Every two months, infants came into the lab for an eye-tracking study to test their word comprehension (and, for older infants, their word production). This volume includes the audio and video recordings from the 6-month home visits. Corresponding test dates for each audio and video recording are included as a supplementary spreadsheet, which can be accessed in the materials folder of this volume. The day-long audio recordings were generated using child-perspective LENA recorders (LENA Research Foundation, Boulder, Colorado, United States) worn by the infant.
    The audio recordings are generated from a single LENA audio recording, converted from LENA's proprietary algorithmic output (.its) for annotation in CHA format. The hour-long video recordings show a composite view of infants' typical lives with 1–4 camera feeds. In the standard setup, infants are equipped with 2 headcams and a centralized camcorder that captures the entire room. The precise arrangement and number of cameras varies per video, as a function of whether the child would wear the hat with the cameras and whether the cameras' files became corrupted during the recordings. Shared files have been scrubbed of certain personal information (e.g., full names, addresses); this leads to some silent periods on the audio track and some blacked-out periods on the video track. Only sections of the files that human listeners have verified to contain no extremely personal content (or from which such information has been scrubbed) are shared here. If you notice anything that you believe we may have missed in terms of personal information, please contact us as soon as possible so we can rectify the issue. Infants in this sample are from the upstate New York area. The sample is generally middle class, with a range of incomes and an above-average maternal education level. The sample is predominantly white. All infants heard majority English at home (>75%) and had no known vision or hearing issues at birth. Please contact Elika Bergelson directly (elika.bergelson@gmail.com) to discuss further aspects of the sample design, annotation, and analysis. These data were collected at the University of Rochester and are now being analyzed at Duke University. Further details of the project are available on our website, wiki, and GitHub repo, linked below.

  • Multisensory Attention Assessment Protocol (MAAP)

    MAAP Description: The Multisensory Attention Assessment Protocol (MAAP) is a novel procedure designed to assess individual differences in multiple components of attention to dynamic, audiovisual social and nonsocial events within a single session, derived from standard visual preference procedures and gap-overlap tasks. It is appropriate for infants and children starting at 3 months of age. The MAAP integrates into a single test measures of three fundamental “building blocks” of attention that support the typical development of social and communicative functioning. The protocol indexes 1) duration of looking (attention maintenance), 2) speed of attention shifting, and 3) accuracy of intersensory matching to audiovisual events in the context of high competition (an irrelevant distractor event is present) or low competition (no distractor event present). It thus provides 6 measures of attention—duration, speed, and accuracy under high vs. low competition—for social and nonsocial events. The difference between performance under high- and low-competition conditions reflects the cost of competing stimulation on each measure of attention (see highlight video). MAAP Method: The protocol consists of 24 13-s trials composed of 2 blocks of 12 trials: one block of social events (women speaking with positive affect) and one block of nonsocial events (objects being dropped into a clear container). On each trial, a 3-s central visual stimulus (dynamic geometric patterns) is followed by two lateral, dynamic video events for 12 s—one in synchrony with an accompanying natural soundtrack and the other out of synchrony. On half of the trials, the central visual event remains on while the lateral events are presented, providing additional competing stimulation (high-competition trials); on the other half, the central visual event disappears as soon as the lateral events appear (low-competition trials).
Participants are videotaped and/or coded live by trained observers, blind to the lateral positions of the events. MAAP Measures: Duration of looking (proportion of available looking time [PALT] spent fixating either lateral event), speed of attention shifting (reaction time [RT] to shift attention to either lateral event), and accuracy of intersensory matching (proportion of total looking time [PTLT] to the sound-synchronous lateral event) are assessed under high and low competition to social and nonsocial events. The highlight video below presents 2 exemplar trials from each condition in the MAAP: 1) low competition social, 2) high competition social, 3) low competition nonsocial, 4) high competition nonsocial. Note: the precision of audiovisual synchrony viewed in these examples may vary depending on internet connection type and available bandwidth and may not reflect the actual temporal synchronies. The full protocol is played using custom-designed Matlab software.
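    The competition-cost comparison described above amounts to contrasting a measure such as PALT across the high- and low-competition trial sets. The sketch below is a hypothetical illustration: the dictionary fields, the 12-s available-looking window, and the numbers are assumptions, not the protocol's actual coding scheme.

    ```python
    # Illustrative sketch of the MAAP competition-cost comparison on PALT
    # (proportion of available looking time); all data are invented.
    from statistics import mean

    def palt(trials, available=12.0):
        """Proportion of the available looking window spent on either lateral event."""
        return mean(t["lateral_looking"] / available for t in trials)

    def competition_cost(high, low):
        """Drop in attention maintenance attributable to the central distractor."""
        return palt(low) - palt(high)

    low_comp = [{"lateral_looking": 9.6}, {"lateral_looking": 10.8}]
    high_comp = [{"lateral_looking": 6.0}, {"lateral_looking": 7.2}]
    cost = competition_cost(high_comp, low_comp)
    ```

    The same subtraction could be applied to the speed (RT) and accuracy (PTLT) measures to obtain a competition cost for each of the protocol's attention components.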

  • Affordances of Clay at the Clay Topos

    The “Clay Topos” is a playroom in a nursery school (Takahashi Chuo Hoikuen, Japan) where children can be surrounded by and play with up to 0.8 tons of the high-quality soil clay used by artists. The Clay Topos was conceived and realized by the Japanese sculptor Hideki Maeshima. In this collection of videos, a group of children played freely in the Clay Topos, where the structure of children's spontaneous activity and the nature of the forces that structure those activities can be observed. In the first session, 11 children (16–26 months old; Group 1) were invited into the Clay Topos for the first time to play freely for 35 minutes. Caregivers of the nursery school were present and interacted naturally with the children. Three roughly square lumps of soil clay (about 90 x 90 cm, 15 cm thick, with uneven surfaces) were placed in a row on the floor, about 70 cm apart from each other. In the second session, the behavior of the same children in the same Clay Topos was recorded 5 months after the first session; between the two sessions, these children had played in the Clay Topos several times. The third set of recordings shows the behavior of children from a different age group (29–38 months old; Group 2) when they were invited into the Clay Topos for the first time. All sessions were recorded simultaneously with 4 video cameras from multiple perspectives, so that it is possible to follow the behavior of each child wherever they move in the room.

  • 2016 Cognitive Science Workshop: Active Learning

    Talks and slides from a full-day workshop on "Active Learning" at the 2016 Cognitive Science Society meeting. In this workshop, we invited speakers from a variety of approaches (cognitive development, education, and computational modeling) to broadly inform our understanding of active learning. We examined what "active" means in active learning, with talks on the cognitive mechanisms that might support active learning, including attention, hypothesis generation, explanation, pretend play, and question asking. We also explored how "efficient" learners are when planning and executing actions in the service of learning, and whether there are developmental or socio-economic differences in active learning. The workshop was divided into three main themes, with speakers from education, modeling, and developmental backgrounds in each. After each set of talks, ample time was scheduled for discussion, encouraging participants to engage fully with the speakers. The workshop concluded with a final broad discussion on open questions and the future of active learning research.

  • Pubertal development shapes perception of complex facial expressions

    We previously hypothesized that pubertal development shapes the emergence of new components of face processing (Scherf et al., 2012; Garcia & Scherf, 2015). Here, we evaluate this hypothesis by investigating emerging perceptual sensitivity to complex versus basic facial expressions across pubertal development. We tested pre-pubescent children (6–8 years), age- and sex-matched adolescents in early and later stages of pubertal development (11–14 years), and sexually mature adults (18–24 years). Using a perceptual staircase procedure, participants made visual discriminations of both socially complex expressions (sexual interest, contempt), which are arguably relevant to the emerging peer-oriented relationships of adolescence, and basic expressions (happiness, anger), which are important even in early infancy. Only sensitivity to complex expressions improved as a function of pubertal development. The ability to perceive these expressions is adult-like by late puberty, when adolescents become sexually mature. This pattern of results provides the first evidence that pubertal development specifically influences emerging affective components of face perception in adolescence.
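    A perceptual staircase procedure like the one mentioned above adaptively adjusts stimulus intensity based on the participant's responses. The sketch below shows one common variant (2-down-1-up); the study's actual rule, step size, and intensity units are not given in this description, so everything here is an assumption for illustration only.

    ```python
    # Minimal sketch of a 2-down-1-up adaptive staircase: two consecutive
    # correct responses make the discrimination harder (lower intensity),
    # any error makes it easier. Parameters are hypothetical.
    def run_staircase(responses, start=1.0, step=0.1, floor=0.05):
        """Track stimulus intensity across a sequence of correct/incorrect responses."""
        intensity, correct_streak, track = start, 0, []
        for correct in responses:
            track.append(intensity)  # intensity presented on this trial
            if correct:
                correct_streak += 1
                if correct_streak == 2:  # two correct in a row -> harder
                    intensity = max(floor, intensity - step)
                    correct_streak = 0
            else:  # any error -> easier
                intensity += step
                correct_streak = 0
        return track

    track = run_staircase([True, True, True, False, True, True])
    ```

    The 2-down-1-up rule converges on the intensity a participant discriminates correctly about 71% of the time; averaging the intensities at response reversals gives a threshold estimate.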

  • Emergence of the ability to perceive dynamic events from still pictures in human infants

    The ability to understand a visual scene depicted in a still image is among the abilities shared by all human beings. The aim of the present study was to examine when human infants acquire the ability to perceive the dynamic events depicted in still images (implied motion perception). To this end, we tested whether 4- and 5-month-old infants shifted their gaze toward the direction cued by a dynamic running action depicted in a still figure of a person. Results indicated that the 5- but not the 4-month-olds showed a significant gaze shift toward the direction implied by the posture of the runner (Experiments 1, 2, and 3b). Moreover, the older infants showed no significant gaze shift toward the direction cued by control stimuli, which depicted a figure in a non-dynamic standing posture (Experiment 1), an inverted running figure (Experiment 2), and some of the body parts of a running figure (Experiment 3a). These results suggest that only the older infants responded in the direction of the implied running action of the still figure; thus, implied motion perception emerges around 5 months of age in human infants.

  • Infant-specific gaze patterns in response to radial optic flow

    The focus of a radial optic flow field is a valid visual cue that animals use to perceive and control heading direction. Gaze patterns in response to the focus of radial optic flow were measured in human infants (N = 100, 4–18 months) and in adults (N = 20) using an eye-tracking technique. Overall, although the adults showed an advantage in detecting the focus of an expansion flow (representing forward locomotion) over that of a contraction flow (representing backward locomotion), infants younger than 1 year showed an advantage in detecting the focus of a contraction flow. Infants aged between 13 and 18 months showed no significant advantage in detecting the focus in either the expansion or the contraction flow. The uniqueness of infants' gaze patterns in response to the focus of radial optic flow shows that the visual information necessary to perceive heading direction potentially differs between younger and mature individuals.