A central goal of visual neuroscience is to understand the computations performed by the visual system. We will have achieved this goal when our models can predict the responses of neurons to arbitrary visual stimuli. These include not only the simple stimuli typically used in the laboratory (such as gratings, spots, and bars), but also much more complex sequences of images, including those that the eye would see in nature.
In our laboratory we are working towards this goal. We concentrate on the responses of neurons in the lateral geniculate nucleus (LGN), which receives inputs from the eyes, processes them, and sends the results to the cerebral cortex. Predicting the output of the LGN means understanding the computations performed by the complex circuitry of the retina and visual thalamus. It also means understanding the input to the cortex.
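To make the notion of "predicting responses to arbitrary stimuli" concrete, here is a minimal, purely illustrative sketch of a functional model of the kind used in this line of work: a linear-nonlinear prediction in which each movie frame is filtered by a center-surround receptive field and the result is half-rectified into a firing rate. The receptive-field shape, parameter values, and function names are assumptions for illustration, not the model of the cited paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical receptive field: a difference-of-Gaussians center-surround
# profile (excitatory center, weaker inhibitory surround), a common first
# approximation for LGN receptive fields.
size = 15
y, x = np.mgrid[:size, :size] - size // 2
center = np.exp(-(x**2 + y**2) / (2 * 1.5**2))
surround = 0.6 * np.exp(-(x**2 + y**2) / (2 * 4.0**2))
rf = center - surround

def predict_rate(frame, rf, gain=1.0, threshold=0.0):
    """Linear-nonlinear prediction: filter the frame with the receptive
    field, then half-rectify the drive to obtain a firing rate."""
    drive = np.sum(frame * rf)
    return gain * max(drive - threshold, 0.0)

# A stand-in "stimulus movie": random frames in place of Tarzan or CatCam.
movie = rng.standard_normal((100, size, size))
rates = np.array([predict_rate(frame, rf) for frame in movie])
print(rates.shape)         # one predicted firing rate per frame
print(bool(np.all(rates >= 0)))  # rates are non-negative after rectification
```

A real model would also include temporal filtering, gain control, and spiking nonlinearities; this sketch only shows the basic stimulus-in, rate-out structure that model evaluation on arbitrary movies requires.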
To get a feeling for our stimuli and for the responses of LGN neurons, watch the following movies. The first is an artificial stimulus, the cartoon Tarzan. The second is a natural stimulus, a CatCam movie. CatCam movies are captured by a camera gently placed on the head of a cat roaming through a forest (courtesy of the Koenig laboratory). The soundtrack of each movie is generated from the spikes of an LGN neuron recorded in our laboratory. The little red circles indicate the position of the receptive field center.
The results of these efforts are described in: V Mante, V Bonin, and M Carandini, "Functional mechanisms shaping lateral geniculate responses to artificial and natural stimuli". Neuron, 58:625-638 (2008).