Our perception of the world is shaped by our past experiences and expectations, made possible by the incredible flexibility of brain circuits. Still, the neural changes that occur to support adaptive behavior in a dynamically changing environment are not well understood.
To further our understanding of the neural basis of behavior, the Visual Behavior 2P project utilized the Allen Brain Observatory (schematized below) to collect a large-scale, highly standardized dataset consisting of recordings of neural activity in mice that have learned to perform a visually-guided task. This dataset can be used to investigate how patterns of activity across the visual cortex are influenced by experience and expectation, as mice are exposed to new stimuli or unexpected events during behavior. In addition, fluctuations in attention and motivation can reveal the impact of internal states such as task engagement on neural representations.
This dataset includes neural and behavioral measurements from 107 mice, including 4787 behavior training sessions and 704 in vivo imaging sessions, resulting in longitudinal recordings from 50,482 cortical cells. These data are made openly accessible, with all recorded time series, behavioral events, and experimental metadata conveniently packaged in Neurodata Without Borders (NWB) files that can be accessed and analyzed using our open source Python software package, the AllenSDK.
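As a minimal sketch of data access, the AllenSDK exposes this dataset through a project cache class; the snippet below follows the class and method names in the AllenSDK documentation (`VisualBehaviorOphysProjectCache`, `from_s3_cache`), though exact names and columns may vary by AllenSDK version. Note that the first access streams data from the public S3 bucket, so this requires a network connection and local disk space.

```python
# Sketch: load Visual Behavior 2P data via the AllenSDK project cache.
# Assumes a recent allensdk release is installed; NWB files are fetched
# from the public S3 bucket on first use and cached under `cache_dir`.
from allensdk.brain_observatory.behavior.behavior_project_cache import (
    VisualBehaviorOphysProjectCache,
)

cache = VisualBehaviorOphysProjectCache.from_s3_cache(
    cache_dir="./visual_behavior_data"
)

# Metadata tables: one row per imaging plane ("ophys experiment").
experiments = cache.get_ophys_experiment_table()
print(experiments[["mouse_id", "session_type", "targeted_structure"]].head())

# Load one session's NWB file as a BehaviorOphysExperiment object.
experiment = cache.get_behavior_ophys_experiment(experiments.index[0])
print(experiment.dff_traces.head())  # dF/F traces, one row per cell
print(experiment.licks.head())       # lick times during the task
```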
The Visual Behavior project is built upon a change detection behavioral task. Briefly, in this go/no-go task, mice are presented with a continuous series of flashed stimuli and they earn water rewards by correctly reporting when the identity of the flashed image changes (schematized below from Groblewski & Ollerenshaw, 2020).
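Performance on a go/no-go task like this is typically summarized by the hit rate (responses to image changes), the false-alarm rate (responses when no change occurred), and their combination into d-prime. The following self-contained sketch (our own helper, not AllenSDK code) shows the standard computation, clipping rates away from 0 and 1 so the inverse normal CDF stays finite:

```python
from statistics import NormalDist


def dprime(hit_rate: float, fa_rate: float, eps: float = 1e-3) -> float:
    """d' = z(hit rate) - z(false-alarm rate), with rates clipped to
    [eps, 1 - eps] so the inverse normal CDF stays finite."""
    z = NormalDist().inv_cdf

    def clip(p: float) -> float:
        return min(max(p, eps), 1 - eps)

    return z(clip(hit_rate)) - z(clip(fa_rate))


# A mouse responding to 80% of image changes but only 10% of
# no-change catch trials is discriminating well:
print(round(dprime(0.8, 0.1), 2))  # 2.12
```

Chance performance (equal hit and false-alarm rates) gives d' = 0, and the clipping keeps d' defined even for perfect sessions with a hit rate of 1.0.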
Mice undergo standardized training, in which they first learn to detect orientation changes of static grating stimuli, then transition to flashed gratings, and subsequently learn to detect changes in the identity of natural scene images (images chosen from the Visual Coding Allen Brain Observatory set of 118 Natural Images). Once mice are well trained in this task, they transition to performing the task under the 2-photon microscope to enable simultaneous measurement of neural activity and behavior in response to both the familiar, trained image set, and a novel image set.
During task performance, the timing of behavioral responses (licks) and earned rewards is recorded, along with the mouse's running speed on the disk. During imaging sessions, eye position and pupil area are also recorded as measures of overall arousal and behavioral state.
This dataset includes single- and multi-plane 2-photon calcium imaging recordings from the visual cortex of transgenic mice expressing the calcium indicator GCaMP6f in populations of excitatory (Slc17a7-IRES2-Cre;Camk2a-tTA;Ai93(TITL-GCaMP6)) and inhibitory (Vip-IRES-Cre;Ai148(TIT2L-GC6f-ICL-tTA2) & Sst-IRES-Cre;Ai148(TIT2L-GC6f-ICL-tTA2)) neurons.
During the imaging portion of the experiment, mice perform the task with the set of natural images they viewed during training (indicated in red in the schematic below), as well as a novel set of images that they have not seen before (indicated in blue). This allows evaluation of the impact of novelty on neural coding for stimulus and behavioral information. Mice also undergo passive viewing sessions during the imaging phase, during which they observe the stimulus in open-loop mode and cannot earn water rewards. Lastly, during imaging sessions only, the expected cadence of flashed images is disrupted by omitting 5% of non-change image flashes.
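In the released NWB files, each flash is a row in a stimulus-presentations table that (per the AllenSDK documentation) carries an `omitted` flag. The toy pandas example below illustrates the kind of filtering involved; the column names follow the AllenSDK table, but all values here are invented for illustration:

```python
import pandas as pd

# Toy stand-in for a session's stimulus_presentations table; the real
# AllenSDK table has many more columns. Values are invented.
stimulus_presentations = pd.DataFrame({
    "start_time": [10.0, 10.75, 11.5, 12.25, 13.0],
    "image_name": ["im065", "im065", None, "im065", "im077"],
    "omitted":    [False, False, True, False, False],
    "is_change":  [False, False, False, False, True],
})

# Omitted flashes: the expected stimulus never appears, so neural
# responses at these times reflect expectation rather than visual drive.
omissions = stimulus_presentations[stimulus_presentations["omitted"]]
print(omissions["start_time"].tolist())  # [11.5]
```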
To ensure that any observed effects of stimulus novelty are due to lack of experience with those images, rather than properties of the specific images that were used, we trained a subset of mice with the opposite stimulus configuration such that the training set for these mice was the image set that was novel for a different group of mice (compare dataset variants VisualBehavior and VisualBehaviorTask1B in the schematic below).
A key aspect of the experimental design involved the repeated targeting of populations of neurons across multiple days, allowing analysis of single cell changes across behavioral and sensory conditions, including familiar and novel stimuli, engaged behavior, and passive viewing.
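Because imaging planes are revisited across days, a matched cell keeps the same `cell_specimen_id` across sessions in the AllenSDK tables (per its documentation). A toy example of selecting longitudinally tracked cells, with invented IDs standing in for the real cells table:

```python
import pandas as pd

# Toy stand-in for the AllenSDK cells table: one row per
# (cell, experiment) pair; cell_specimen_id is stable across sessions
# for a matched cell. All IDs here are invented.
cells = pd.DataFrame({
    "cell_specimen_id":    [1001, 1001, 1001, 1002, 1002, 1003],
    "ophys_experiment_id": [501,  502,  503,  501,  503,  502],
})

# Count sessions per cell, then keep cells recorded in at least two
# sessions, e.g. to compare familiar- vs novel-image responses.
n_sessions = cells.groupby("cell_specimen_id")["ophys_experiment_id"].nunique()
matched = n_sessions[n_sessions >= 2].index.tolist()
print(matched)  # [1001, 1002]
```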
The table below summarizes the number of mice, sessions, imaging planes, and unique recorded neurons for each transgenic line and dataset variant:
Access the data via the AllenSDK
Download the Technical White Paper
Have questions about the dataset? Check out the community forum