Browsing by Author "Gao, Sibo"
Now showing 1 - 2 of 2
Item
Hippocampal Encoding of Space Induced by Novel Auditory VR System using One-Photon Miniaturized Microscope (2020-04-24) Gao, Sibo; Kemere, Caleb; McGinley, Matthew
In virtual reality settings, spatial navigation in animal models has traditionally been studied using primarily visual cues. However, auditory cues play an important role in navigation for animals, especially when the visual system cannot detect objects or predators in the dark. We have developed a virtual reality system defined exclusively by auditory landmarks for head-fixed mice performing a navigation task. We report behavioral evidence that mice can learn to navigate in our task. Namely, we observed anticipatory licking and modest anticipatory slowing preceding the reward region. Furthermore, we found that the animal’s licking behavior changes when switching from a familiar virtual environment to a novel virtual environment, and reverts to normal after the familiar virtual environment is re-introduced within the same session. While animals carried out the task, we performed in-vivo calcium imaging in the CA1 region of the hippocampus using a modified Miniscope system. We envision that this approach has the potential to provide new insight into how animals respond to stimuli using the spatial aspects of sound in an environment. (Abstract adapted from Gao et al., EMBC’20, forthcoming.)
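The listing itself contains no code, but as a rough illustration of the kind of position-driven auditory landmark logic the abstract describes, the Python sketch below maps a head-fixed animal's track position to a per-landmark loudness. The track length, landmark layout, and all names are hypothetical assumptions for illustration, not taken from the thesis or from the Miniscope/VR software.

```python
from dataclasses import dataclass

# Hypothetical sketch: map a head-fixed mouse's position on a circular
# virtual track to how loudly each auditory landmark should play.
# Track length, landmark layout, and names are illustrative only.

TRACK_LENGTH_CM = 200.0  # assumed virtual lap length


@dataclass
class Landmark:
    name: str         # e.g. a distinct tone or noise token
    center_cm: float  # position of the landmark on the track
    width_cm: float   # spatial extent over which it is audible


LANDMARKS = [
    Landmark("tone_A", 40.0, 30.0),
    Landmark("tone_B", 110.0, 30.0),
    Landmark("reward_cue", 170.0, 20.0),
]


def circular_distance(a: float, b: float) -> float:
    """Shortest distance between two positions on a circular lap."""
    d = abs(a - b) % TRACK_LENGTH_CM
    return min(d, TRACK_LENGTH_CM - d)


def landmark_gains(position_cm: float) -> dict[str, float]:
    """Return a 0-1 gain for each landmark given the current position."""
    gains = {}
    for lm in LANDMARKS:
        d = circular_distance(position_cm, lm.center_cm)
        # Linear fall-off to silence at the edge of the landmark's extent.
        gains[lm.name] = max(0.0, 1.0 - d / (lm.width_cm / 2.0))
    return gains


if __name__ == "__main__":
    # Position would normally come from a rotary encoder each frame.
    print(landmark_gains(45.0))   # tone_A audible, others silent
    print(landmark_gains(165.0))  # reward_cue audible
```

Keeping the cue purely a function of position is one way sound landmarks could later be swapped for visual stimuli at the same locations, as the Treadmill-IO work below describes.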
Item (Embargo)
Treadmill-IO: a novel multi-modal VR tool for studying learning of complex rodent behaviors (2024-10-25) Gao, Sibo; Kemere, Caleb
Traditionally, spatial navigation in animal models in virtual reality (VR) settings has been studied primarily using visual cues. However, few studies have investigated VR navigation in environments that promote interactions between the auditory system and the hippocampus. Here I present a novel multi-modal virtual reality system whose environments can be defined by visual stimuli, sound stimuli, or both, modulated according to the animal’s real-time position. To examine how the hippocampus represents the visual and sound environment, I developed a hippocampus-dependent task in which animals are trained to lick for a reward in the reward zone on each lap. I report behavioral evidence that mice can learn to navigate in our sound VR task. In the visual VR environment, I replaced the sound stimuli with different types of visual stimuli at the same locations, preserving the spatial information across both types of VR environments, and observed the same result. An increasing volume of research devotes substantial resources to high-throughput animal training on difficult tasks, so making informed decisions early in training matters: valuable resources and time should not be wasted on animals that are unable to learn. Here I present parameters that could differentiate learners from non-learners, namely lick probability, lick selectivity, lick rate, percentage of valid laps, average speed, and lick latency. I observe that learners have higher lick probability and lower lick latency while maintaining a high percentage of valid laps, whereas non-learners exhibit low lick probability and high lick latency with a low percentage of valid laps. During transitions between environments of different maze lengths, learners increase their average speed while non-learners maintain or decrease theirs. With the combined information from these parameters, experimenters can allocate resources more efficiently, contributing to a faster turnaround for research.
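As a hedged illustration of how some of the session-level parameters named in the abstract might be computed, here is a minimal Python sketch. The per-lap data layout, field names, and the learner thresholds are assumptions for illustration, not the thesis' actual analysis pipeline.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical sketch of session-level metrics named in the abstract
# (lick probability, lick latency, percentage of valid laps, average speed).
# Data layout and thresholds are assumed, not taken from the thesis.


@dataclass
class Lap:
    licked_in_reward_zone: bool   # did the animal lick inside the reward zone?
    lick_latency_s: float | None  # time from reward-zone entry to first lick
    valid: bool                   # lap met basic inclusion criteria
    mean_speed_cm_s: float        # average running speed on the lap


def session_metrics(laps: list[Lap]) -> dict[str, float]:
    """Summarize one training session from its per-lap records."""
    latencies = [lap.lick_latency_s for lap in laps
                 if lap.licked_in_reward_zone and lap.lick_latency_s is not None]
    return {
        "lick_probability": mean(lap.licked_in_reward_zone for lap in laps),
        "lick_latency_s": mean(latencies) if latencies else float("inf"),
        "valid_lap_pct": 100.0 * mean(lap.valid for lap in laps),
        "avg_speed_cm_s": mean(lap.mean_speed_cm_s for lap in laps),
    }


def looks_like_learner(m: dict[str, float]) -> bool:
    # Illustrative thresholds only; real criteria would be fit to training data.
    return (m["lick_probability"] > 0.7
            and m["lick_latency_s"] < 1.0
            and m["valid_lap_pct"] > 80.0)
```

Metrics like these, tracked across training days, could flag animals unlikely to learn early enough to reallocate training resources, which is the use the abstract proposes.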